Update dotprompt and eval doc to reflect menuSuggestionFlow theme (#145)
MaesterChestnut authored May 13, 2024
1 parent 38c3d6d · commit a9e0180
Showing 2 changed files with 18 additions and 14 deletions.
docs/dotprompt.md: 20 changes (10 additions & 10 deletions)
@@ -157,31 +157,31 @@ You can set the format and output schema of a prompt to coerce into JSON:
model: vertexai/gemini-1.0-pro
input:
  schema:
-    location: string
+    theme: string
output:
  format: json
  schema:
    name: string
-    hitPoints: integer
-    description: string
+    price: integer
+    ingredients(array): string
---
-Generate a tabletop RPG character that would be found in {{location}}.
+Generate a menu item that could be found at a {{theme}} themed restaurant.
```

When generating a prompt with structured output, use the `output()` helper to
retrieve and validate it:

```ts
-const characterPrompt = await prompt('create_character');
+const createMenuPrompt = await prompt('create_menu');
-const character = await characterPrompt.generate({
+const menu = await createMenuPrompt.generate({
  input: {
-    location: 'the beach',
+    theme: 'banana',
  },
});
-console.log(character.output());
+console.log(menu.output());
```

## Multi-message prompts
@@ -201,8 +201,8 @@ input:
---
{{role "system"}}
-You are a helpful AI assistant that really loves to talk about puppies. Try to work puppies
-into all of your conversations.
+You are a helpful AI assistant that really loves to talk about food. Try to work
+food items into all of your conversations.
{{role "user"}}
{{userQuestion}}
```
docs/evaluation.md: 12 changes (8 additions & 4 deletions)
@@ -34,14 +34,18 @@ Note: The configuration above requires installing the `@genkit-ai/evaluator` and
Start by defining a set of inputs that you want to use as an input dataset called `testQuestions.json`. This input dataset represents the test cases you will use to generate output for evaluation.

```json
-["How old is Bob?", "Where does Bob lives?", "Does Bob have any friends?"]
+[
+  "What is on the menu?",
+  "Does the restaurant have clams?",
+  "What is the special of the day?"
+]
```

You can then use the `eval:flow` command to evaluate your flow against the test
cases provided in `testQuestions.json`.

```posix-terminal
-genkit eval:flow bobQA --input testQuestions.json
+genkit eval:flow menuQA --input testQuestions.json
```

You can then see evaluation results in the Developer UI by running:
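The command itself is collapsed in this diff view. As a rough sketch (an assumption on my part, not part of the change shown here), the standard Genkit CLI launches the Developer UI with:

```posix-terminal
genkit start
```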
@@ -55,7 +59,7 @@ Then navigate to `localhost:4000/evaluate`.
Alternatively, you can provide an output file to inspect the results in a JSON file.

```posix-terminal
-genkit eval:flow bobQA --input testQuestions.json --output eval-result.json
+genkit eval:flow menuQA --input testQuestions.json --output eval-result.json
```

Note: Below you can see an example of how an LLM can help you generate the test
@@ -167,7 +171,7 @@ genkit eval:run customLabel_dataset.json
To output to a different location, use the `--output` flag.

```posix-terminal
-genkit eval:flow bobQA --input testQuestions.json --output customLabel_evalresult.json
+genkit eval:flow menuQA --input testQuestions.json --output customLabel_evalresult.json
```

To run on a subset of the configured evaluators, use the `--evaluators` flag and provide a comma-separated list of evaluators by name:
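The example invocation is collapsed below this point in the diff. A minimal sketch, reusing the flow and input file from the examples above and with placeholder evaluator names (the actual names depend on which evaluator plugins you configured), might look like:

```posix-terminal
genkit eval:flow menuQA --input testQuestions.json --evaluators=genkitEval/faithfulness,genkitEval/maliciousness
```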
