Merge pull request #108 from ks6088ts-labs/feature/issue-107_promptflow-langchain

add flex flow with LangChain sample
ks6088ts authored Aug 31, 2024
2 parents df00d51 + 5286387 commit 02c34a1
Showing 5 changed files with 123 additions and 25 deletions.
86 changes: 61 additions & 25 deletions apps/11_promptflow/README.md
@@ -36,8 +36,42 @@ $ pip install -r requirements.txt
[Prompt flow > Quick start](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html) provides a quick start guide to Prompt flow.
Some of the examples are extracted from [github.com/microsoft/promptflow/examples](https://github.com/microsoft/promptflow/tree/main/examples) to guide you through the basic usage of Prompt flow.

**Set up connection**

```shell
$ cd apps/11_promptflow

# List connections
$ pf connection list

# Set parameters
$ AZURE_OPENAI_KEY=<your_api_key>
$ AZURE_OPENAI_ENDPOINT=<your_api_endpoint>
$ CONNECTION_NAME=open_ai_connection

# Delete connection (if needed)
$ pf connection delete \
--name $CONNECTION_NAME

# Create connection
$ pf connection create \
--file connection_azure_openai.yaml \
--set api_key=$AZURE_OPENAI_KEY \
--set api_base=$AZURE_OPENAI_ENDPOINT \
--name $CONNECTION_NAME

# Show connection
$ pf connection show \
--name $CONNECTION_NAME
```
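The `connection_azure_openai.yaml` file passed to `pf connection create` is not shown in this diff. A minimal sketch of what such a file typically contains, assuming the standard Prompt flow Azure OpenAI connection schema (the field names and schema URL here are assumptions; check the actual file in the repository):

```yaml
# Sketch of an Azure OpenAI connection definition; the real
# connection_azure_openai.yaml in this repo may differ.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
type: azure_open_ai
name: open_ai_connection
api_key: <your_api_key>        # overridden by --set api_key=...
api_base: <your_api_endpoint>  # overridden by --set api_base=...
api_type: azure
api_version: "2024-02-01"      # example value; use your deployment's version
```

The `--set` flags in the commands above override these placeholder values at creation time, so secrets never need to be committed to the file.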

### [chat_minimal](https://github.com/microsoft/promptflow/tree/main/examples/flex-flows/chat-minimal)

A chat flow defined as a plain function, demonstrating the minimal code needed to build a chat flow.

Prompt flow's tracing feature lets you trace how a conversation moves through the flow; this example shows how it is used.
Details are available in [Tracing](https://microsoft.github.io/promptflow/how-to-guides/tracing/index.html).

**Run as normal Python script**

```shell
@@ -67,6 +101,8 @@ $ pf run create \
--stream
```

`--column-mapping` is used to map the data in the JSONL file to the flow. For more details, refer to [Use column mapping](https://microsoft.github.io/promptflow/how-to-guides/run-and-evaluate-a-flow/use-column-mapping.html).
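To illustrate what the mapping expresses (this is an illustration of the data shape, not Prompt flow's internals): each line of the JSONL data file supplies the value for the flow input named on the left-hand side of the mapping.

```python
import json

# Two JSONL records like those in data.jsonl.
jsonl_text = '{"question": "What\'s 4+4?"}\n{"question": "What\'s 4x4?"}'

# --column-mapping question='${data.question}' means: for every record,
# take its "question" field and pass it as the flow input "question".
rows = [json.loads(line) for line in jsonl_text.splitlines()]
flow_inputs = [{"question": row["question"]} for row in rows]
print(flow_inputs)
```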

### playground_chat

```shell
@@ -79,30 +115,6 @@ $ pf flow init \

$ cd playground_chat

# Set parameters
$ CONNECTION_NAME=open_ai_connection
$ AZURE_OPENAI_KEY=<your_api_key>
$ AZURE_OPENAI_ENDPOINT=<your_api_endpoint>

# List connections
$ pf connection list


# Delete connection (if needed)
$ pf connection delete \
--name $CONNECTION_NAME

# Create connection
$ pf connection create \
--file azure_openai.yaml \
--set api_key=$AZURE_OPENAI_KEY \
--set api_base=$AZURE_OPENAI_ENDPOINT \
--name $CONNECTION_NAME

# Show connection
$ pf connection show \
--name $CONNECTION_NAME

# Interact with chat flow
$ pf flow test \
--flow . \
@@ -230,14 +242,38 @@ $ pf run create \
$ pf run show-details --name $RUN_NAME
```

[Tutorial: How prompt flow helps on quality improvement](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) provides a detailed guide on how to use Prompt flow to improve the quality of your LLM applications.

### [eval-chat-math](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-chat-math)

This example shows how to evaluate answers to math questions by comparing the flow's outputs with the expected answers numerically.
Details are available in [eval-chat-math/README.md](./eval-chat-math/README.md).
To see how to operate the flow in VS Code, refer to [Build your high quality LLM apps with Prompt flow](https://www.youtube.com/watch?v=gcIe6nk2gA4).
This video shows how to evaluate answers to math questions and guides you through tuning the prompts using variants.
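The numeric-comparison idea can be sketched in a few lines of Python (a simplified stand-in for illustration, not the eval-chat-math flow's actual implementation):

```python
def is_answer_correct(output: str, groundtruth: str, tol: float = 1e-6) -> bool:
    """Compare a flow's answer with the expected answer numerically,
    falling back to exact string comparison for non-numeric answers."""
    try:
        return abs(float(output) - float(groundtruth)) <= tol
    except ValueError:
        return output.strip() == groundtruth.strip()


print(is_answer_correct("4.0", "4"))  # "4.0" and "4" are numerically equal
print(is_answer_correct("16", "15"))  # not within tolerance
```

Parsing both sides as floats makes the check robust to formatting differences like `4` vs `4.0`, which a plain string comparison would miss.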

<!-- TODO: rag, tracing, deployments -->
### flex_flow_langchain

To guide you through working with LangChain, we provide an example flex flow that wraps a LangChain `AzureChatOpenAI` chat model in a callable `LangChainRunner` class (defined in `main.py`).

```shell
$ cd apps/11_promptflow/flex_flow_langchain
$ pf flow test \
--flow main:LangChainRunner \
--inputs question="What's 2+2?" \
--init custom_connection=open_ai_connection

$ RUN_NAME=flex_flow_langchain-$(date +%s)
$ pf run create \
--name $RUN_NAME \
--flow . \
--data ./data.jsonl \
--column-mapping question='${data.question}' \
--stream

$ pf run show-details --name $RUN_NAME
```

<!-- TODO: rag, deployments -->

## References

2 changes: 2 additions & 0 deletions apps/11_promptflow/flex_flow_langchain/data.jsonl
@@ -0,0 +1,2 @@
{"question": "What's 4+4?"}
{"question": "What's 4x4?"}
8 changes: 8 additions & 0 deletions apps/11_promptflow/flex_flow_langchain/flow.flex.yaml
@@ -0,0 +1,8 @@
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
entry: main:LangChainRunner
sample:
  inputs:
    input: What's 2+2?
  prediction: What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.
init:
  custom_connection: open_ai_connection
52 changes: 52 additions & 0 deletions apps/11_promptflow/flex_flow_langchain/main.py
@@ -0,0 +1,52 @@
from dataclasses import dataclass

from langchain_openai import AzureChatOpenAI
from promptflow.client import PFClient
from promptflow.connections import CustomConnection
from promptflow.tracing import trace


@dataclass
class Result:
    answer: str


class LangChainRunner:
    def __init__(self, custom_connection: CustomConnection):
        # https://python.langchain.com/v0.2/docs/integrations/chat/azure_chat_openai/
        self.llm = AzureChatOpenAI(
            temperature=0,
            api_key=custom_connection.secrets["api_key"],
            api_version=custom_connection.configs["api_version"],
            azure_endpoint=custom_connection.configs["api_base"],
            model="gpt-4o",
        )

    @trace
    def __call__(
        self,
        question: str,
    ) -> Result:
        response = self.llm.invoke(
            [
                (
                    "system",
                    "You are asking me to do some math, I can help with that.",
                ),
                ("human", question),
            ],
        )
        return Result(answer=response.content)


if __name__ == "__main__":
    from promptflow.tracing import start_trace

    start_trace()
    pf = PFClient()
    connection = pf.connections.get(name="open_ai_connection")
    runner = LangChainRunner(custom_connection=connection)
    result = runner(
        question="What's 2+2?",
    )
    print(result)
