# Workflow (Java, Python, Rust): SESv2 Weekly Mailer Prompts
These are the prompts used to develop the SESv2 Weekly Mailer workflow, preserved for reference, learning, and workshop materials.
---
combined: true
overwrite: false
---
# TCX SDK Code Examples

The TCX SDK Code Examples team produces code examples that demonstrate how to automate AWS services to accomplish key user stories for developers and programmers. These code examples are quick and easy to find and use, are continually tested, and demonstrate AWS and community coding best practices.

## Mission

We provide code examples for builders integrating AWS services into their applications and business workflows using the AWS Software Development Kits (SDKs). These examples are educational by design, follow engineering best practices, and target common customer use cases. Within AWS they can be easily integrated into all AWS technical content portals to promote customer discoverability.

## Vision

We envision a best-in-class library of code examples for every AWS service and in every actively maintained SDK language. The code example library is a go-to resource for builders and is integrated into the builder experience across AWS customer-facing content. Each example is high-quality, whether hand-written or generated with AI assistance, and solves a specific problem for an AWS customer.
## Tenets

These are our tenets, unless you know better ones:

1. We are educators. Comprehension and learnability always take precedence.
2. We are engineers. Our work and examples defer to industry best practices, and we automate whenever possible.
3. Our examples address common user challenges. They do not deliberately mirror AWS service silos.
4. Our examples are discoverable. We surface discrete solutions from within larger examples and proactively work with content partners to ensure builders find them.
5. We are subject matter experts. We are the primary reference for code example standards in TCX.
A Workflow Example, as defined by the TCX Code Examples team, is an example scenario that is targeted to a particular real-world user story, use case, problem, or other common service integration. It may use one or more services, and it does not necessarily target a specific set of actions in a single service. Instead, it focuses directly on a specific task or set of service interactions. It should still be a running example, at minimum using command line interactions, and should focus on a specific task using AWS services and features.
# Ailly - AI Writing Ally

Load your writing. Train Ailly on your voice. Write your outline. Prompt Ailly to continue the writing. Edit its output, and retrain to get it even more like that.

Rhymes with Daily.
## Conversational History

In LLM chatbots like ChatGPT, or chains like LangChain, the history of the conversation is kept in the sequence of human and assistant interactions. This history is typically kept in memory, or at least in a format inaccessible to the user. The user can only regenerate sequences, or add their next prompt at the end.
Ailly removes this limitation by using your file system as the conversational history. The writer maintains full control over the sequence of prompts, up to and including editing the LLM's response before (re)generating the next prompt! This lets the writer decide how the conversation should evolve. By editing an LLM response, they can keep the best of what the LLM did in some cases, and modify it in others. Using Ailly's filesystem-based conversational history, each piece of the session can be stored in source control. Version tracking lets the author see how their prompts and responses have changed over time, and unlocks a number of long-term process improvements that are difficult or impossible with chat interfaces.
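
As a concrete illustration, a session on disk might look like the sketch below (file names follow the scouting instructions later in this commit; the comments are illustrative):

```
content/
  .aillyrc              # system prompt and shared greymatter settings
  01_README.md          # first turn: prompt + generated README (combined: true)
  30_SPECIFICATION.md   # later turn: builds on everything above it
  31_PART_1.md          # each file is one exchange, editable and
  32_PART_2.md          # version-controlled like any other source file
```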
In one session, a developer was working on a long sequence of prompts to build a software project. While reviewing an LLM-written draft of the README, the developer wanted the list of API calls to be links to the reference documentation. With a chat conversational history, the developer would have needed to modify the instructions for the entire prompt to encourage creating the list, rerun the generation, and hope the rest of the README came out similarly. Instead, with Ailly, the developer created a new file with only the list and an instruction on how to create URLs from the list items, saved it as list.md (with isolated: true in the combined head), and ran ailly list.md. The LLM followed the instructions and generated just the updated list, and the developer copied that list into the original (generated) README.md. In later prompts, the context window included the full URLs, and the agent model was able to intelligently request to download their contents.

To the author's knowledge, no other LLM interface provides this level of interaction with LLMs.
## Properties

These properties can be set in a combination of places, including the command line, .aillyrc, and greymatter. Later settings override earlier ones.

- `combined` (boolean): If true, the file's body is the response, and the prompt is in the greymatter key `prompt`. If false, the file's body is the prompt, and the response is written to `{file_name}.ailly.md`. Default: false.
- `skip` (boolean): If true, the prompt is not sent through the LLM (but it is still part of the context).
- `isolated` (boolean): If true, the LLM inference includes only the system prompt, not the prior context in this folder.
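
A minimal sketch of how these properties combine in a single prompt file (the prompt text here is illustrative):

```
---
combined: true
isolated: true
prompt: Reformat the list below into documentation links.
---

[Because combined is true, the LLM's response is stored here as the file body.]
```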
---
skip: True
prompt: Write instructions on how to do Workflow Scouting with Ailly.
---

# How to Scout with Ailly

## Prereq: Install and run Ailly
1. On bash, the easiest way is probably an alias: `alias ailly='npx @ailly/cli'`
   - which will probably need `npm install -g @ailly/cli`
   - and can then be updated with `npm install -g @ailly/cli@{version}`
1. Choose your engine, probably bedrock.
   - Export an environment variable: `export AILLY_ENGINE=bedrock`.
   - Ensure you're using your AWS account (`ada` or copying the access keys from isengard).
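
Putting the prerequisites together, a typical first run might look like this (the pinned version is an illustrative placeholder):

```bash
npm install -g @ailly/cli      # one-time install (or @ailly/cli@1.2.3 to pin)
alias ailly='npx @ailly/cli'   # note the quotes around the alias value
export AILLY_ENGINE=bedrock    # use Amazon Bedrock for inference
```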
## Readme

The first step is getting a feel for Ailly and working with it. We'll have it make a README for the workflow, edit the README ourselves for fine detailing, and use it in "assist" or "tmp" or "macro" mode (I need a good name for this) to do bulk editing tasks.
1. WITHOUT using an LLM, do the Workflow process. Meet with a SME and develop a high-level plan for what the workflow should do.
2. Create a new workspace for your scout in the `workflow` folder.
3. Create a new folder for doing Ailly work, `workflow/scout_name/content`, and cd into it.
4. Add a `.aillyrc` file with:
   - A greymatter head:
     ```
     ---
     combined: true
     ---
     ```
   - A level-setting prompt (Coming soon!) to guide output format.
   - The summary of the Code Examples team.
   - A summary of the service you're writing the workflow for.
   - The summary of the workflow.
   - A list of the API calls you expect to make.
5. Add a file, `01_README.md`, with a greymatter head:
   ```
   ---
   prompt: Create a README.md for this project
   ---
   ```
   - At any (and every?) point in this process, experiment with your own prompting!
6. Run Ailly: `ailly 01_README.md`
7. Review the output. Correct it as desired.
   - If there's a thing you want to change, you can use Ailly like a macro.
   - Let's say Ailly creates a list of the API calls to use, but doesn't include links.
   - Create a folder, `tmp`.
   - Create an aillyrc, `tmp/.aillyrc`, with
     ```
     ---
     isolated: true
     ---
     ```
   - Create a file, `tmp/links.md`.
   - Add this head:
     ```
     ---
     prompt: |
       Reformat this list into links to the documentation. For instance, the SendEmail item should link to https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html.
       SendEmail
       CreateEmailIdentity
       [paste the rest of your items]
     ---
     ```
   - Run Ailly: `ailly tmp/links.md`
   - Review the output, which should now have
     ```
     * [SendEmail](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html)
     * [CreateEmailIdentity](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_CreateEmailIdentity.html)
     * [The rest of your links]
     ```
   - If this doesn't work the first time, rerun and see if that helps.
   - If it still doesn't work after a second run, play with the prompt until it does what you want.
   - Paste the output back into the `01_README.md` document.
## First Spec

At this point, you should have a good start on the README, which we can use to have Ailly write the spec. From current experiments, LLM generation works best with ~500-word output "chunks", so we plan our Ailly calls around that. It also works best with additional context, so we'll gather that first.
1. Help Ailly get API docs for the service.
   - Create a new file, `tmp/get_api.md`.
   - Add a prompt:
     ```
     ---
     prompt: |
       Write a (bash or powershell or batch (or python?)) script to download the API docs for each API used in this workflow.
       An API doc link looks like https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html
       After downloading, pass the HTML files first through `pup` to select '#main-content', then use `html2text -nobs -utf8` to get just the main text of the document.
       Put the downloaded text in a file, 10_{ApiName}.md, with a greymatter header that has a property `skip: true`.
       For example, `echo "---\nskip: true\n---" > 10_SendEmail.md\ncurl https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html | pup "#main-content" | html2text -nobs -utf8 >> 10_SendEmail.md`
     ---
     ```
   - This kind of prompt can take a bit of work to get right. In my experience, I spend a couple of minutes digging into the structure of the document, then use command line tools like `pup`, `jq`, `yq`, `html2text`, and `pandoc`.
   - It might be faster to just copy/paste it a handful of times, but that's not as fun.
   - Why spend 2 minutes doing repetitive tasks when we could spend 15 minutes learning how to make the LLM do the repetitive task for us?
   - To run the commands it generates, I just copy/paste them into my terminal. #yolo
   - Real #yolo will come maybe next quarter when I add a tools module to Ailly to give it direct shell access.
   - However you do it, make sure you end up with a few `10_API.md` files in your `content/` folder. These will be used by the remaining prompts; a sketch of the kind of script this prompt should produce follows below.
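   - For reference, a minimal sketch of such a script (the API list is an illustrative subset; it assumes `curl`, `pup`, and `html2text` are installed):
     ```bash
     #!/usr/bin/env bash
     set -euo pipefail
     base=https://docs.aws.amazon.com/ses/latest/APIReference-V2
     for api in SendEmail CreateEmailIdentity CreateContactList; do
       out="10_${api}.md"
       # Write the greymatter header, then append the extracted main text.
       printf '%s\n' '---' 'skip: true' '---' > "$out"
       curl -s "$base/API_${api}.html" \
         | pup '#main-content' \
         | html2text -nobs -utf8 >> "$out"
     done
     ```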
2. Create the first part of the spec, `30_SPECIFICATION.md` (I left the 20s available for future use...)
   - Contents:
     ```
     ---
     prompt: |
       Write a specification for this workflow.
       [Lots of instructions for how you want it to handle the workflow!]
       [How should it handle input? Prompt, or variables? Team says prompt...]
       [Share what works and what doesn't! We can build a library of these!]
       [Example:]
       The specification is independent of any programming language, and should enable any programmer competent with programming using any published AWS SDK to follow along. It must specify the API calls to make, and it must include the parameters to send. It should describe the parameters in a list format. Implementations will use the specific SDKs, so it does not need to specify URL calls to make, only the API calls and the request parameters to include. It must specify the exact environment variable names and files to use when referring to runtime data.
     ---
     ```
   - Run Ailly! `ailly 30_SPECIFICATION.md`
   - Because we specify the file name, Ailly will load the entire directory before this file as context, but only generate this file as output.
   - Keeping with the "500 words" guideline, this output will probably be pretty close to the README but a bit more formal. That's OK; we're going to build from here.
   - Iterate the prompt until you get a result you like. Keep copies or git commits as you prefer.
   - When you have one you like, edit it a bit.
3. Create specific parts of the spec. If the workflow has five "parts", create `31_PART_1.md`, `32_PART_2.md`, etc. (Recommend replacing `PART_1` with the short name of the part or step.)
   - For each, this is the prompt I started with:
     ```
     ---
     prompt: |
       Describe the exact SESv2 API calls and parameters for this step.
       ## Prepare the Application
     ---
     ```
   - This prompt kinda sucked, TBH, but it got the job done. Lots of room for improvement here.
   - Run Ailly: `ailly 31_PART_1.md` (or `ailly 3{1,2,3}*.md`).
   - Iterate!
4. If the spec needs sample files, have Ailly & Claude make them!
   - `50_SAMPLE_FILES.md`
     ```
     ---
     prompt: |
       List and describe the sample files that this workflow will need at runtime.
     ---
     ```
   - `51_SAMPLE_FILE_A.md`
     ```
     ---
     prompt: |
       Create [Sample File A]
     ---
     ```
   - etc.
5. Consolidate these files.
   - Maybe in the future Ailly can have file system access and know how to issue instructions to combine the various in-progress files, but for now I just open the handful of files and copy/paste from `content/01_README.md` to `README.md`; a sketch of a script for this follows.
   - Open a PR and review the workflow spec.
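
For reference, a minimal sketch of the consolidation step (a hypothetical helper, assuming the greymatter header is the first pair of `---` lines in each file):

```bash
#!/usr/bin/env bash
# Copy generated files out of content/, dropping the greymatter header.
strip_greymatter() {
  # Print only lines after the second '---'; the check runs before the
  # increment, so the closing '---' of the header itself is not printed.
  awk 'f >= 2 { print } /^---$/ { f++ }' "$1"
}
strip_greymatter content/01_README.md        > README.md
strip_greymatter content/30_SPECIFICATION.md > SPECIFICATION.md
```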
# First Language

## Structure

1. Create a folder for the language, say, `python`, and add an `.aillyrc`:
   ```
   You are a Python programmer, using Python 3.9.

   <examples>
   <example type="main with argument handling">
   ...python...
   </example>

   <example type="input request">
   ...python...
   </example>

   <example type="error handling">
   ...python...
   </example>

   <example type="sdk call">
   ...python...
   </example>

   <example type="pagination">
   ...python...
   </example>
   </examples>

   Imports should be sorted. [Additional notes as necessary]
   ```
   You may or may not include snippets for these, but they should help. If there are other examples you find help a lot, let the team know!
   Anthropic claims [Claude does best with XML tags](https://docs.anthropic.com/claude/docs/long-context-window-tips), but I've seen it do fine with Markdown, so I'm playing with both: Markdown for quick, self-contained things, and XML for longer/larger/more detailed examples.
1. Copy the current files. Ailly uses `.aillyrc` files going up the tree, but only includes files in the current folder for the current context.
   - Copy the README.md and SPECIFICATION.md consolidated files to `01_README.md` and `02_SPECIFICATION.md`.
1. Find the language-specific API documentation, and put it into `10_[API_CALL].md`.
   - Add greymatter to each with `skip: true` (we don't want to regenerate these).
1. Create a `20_PLAN.md`. Your plan should have a prompt asking for the implementation files and class & method stubs, but not full implementations.
   - Generate the plan. This might take a few passes. Maybe add some examples for how you want it structured. Eventually you should come out with an outline for the project that you're satisfied with.
   - You can ask it to make the plan in the form of a shell script that would create the files it wants - this can be helpful if you know you'll need a few files for different parts.
   - Maybe create `21b_PLAN_SUPPLEMENTAL.md` with `skip: true` and the prompt: "When implementing methods, only implement the method you're currently instructed to implement." There may be other supplemental details as well.
   - The important part here is to remember that you can slip in additional details for all downstream instructions in a few different ways.
   - When running Ailly, run it from the folder with the original `.aillyrc`: `ailly python/20_PLAN.md`.
1. Create `21{a,b,c}_PLAN_DETAILS.md`.
   - Write a prompt that instructs the model to write itself prompts:
     ```
     Based on the plan, create a human prompt for each method in the SES2Mailer class, as well as the main method.
     Format your output as a shell heredoc cat that writes the prompt into a markdown file.
     The markdown should have yaml greymatter with two properties - `skip: true` and `prompt: ` with the content you generate.
     The prompt will be used as the `human` side of an LLM conversation.
     The file should get written to `51_{function_name}.md`.
     Start with these methods:
     - def create_email_identity(self):
     - def create_contact_list(self, contact_list_name):
     [... etc, copied from the plan]
     ```
   - This is the first "model writing for the model" step.
   - After running, you should get a shell script that will make these files, along the lines of the sketch below.
   - You can also make the files yourself.
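   - A hypothetical example of one generated heredoc (the prompt wording is illustrative, not actual model output):
     ```bash
     # Writes one per-method prompt file, greymatter header included.
     cat > 51_create_email_identity.md <<'EOF'
     ---
     skip: true
     prompt: |
       Implement the create_email_identity method of the SES2Mailer class.
       Call the SESv2 CreateEmailIdentity API with the verified email address,
       and handle the case where the identity already exists.
     ---
     EOF
     ```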
1. Create and generate the `51_{function_name}.md` files from the PLAN_DETAILS step.
1. Repeat for testing (or did you proactively add testing in the original plan? Nice!)
1. Consolidate everything you have at this point to a new project. Maybe have Ailly write a script that does it for you?
## Run and Test?

1. Create a new file, and paste the generated code in.
1. Use IDE tooling to fix type issues; run it, test it, etc.
1. Copy your edits back into the output, and add `skip: true` to lock them in.
1. Iterate. If your plan had 4 functions, but you wanted 5, edit the plan and rerun the steps after the plan.
1. If your main function did some special setup, edit it as well.
1. Edit the "first" function with your patterns and best practices. This will 'level set' future answers for runs after this step.
# Second Language

1. Copy all your work from the first language to a second language.
1. Keep `skip: true` on README, Spec, and other language-independent files.
1. Redo the API lookup and extraction.
1. Replace all first-language prompt words with the second - e.g., "write python to" with "write java to" - as sketched below.
1. Rerun for the second language.
1. Re-iterate the Run and Test steps.
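
A minimal sketch of the replacement step (assumes GNU sed; the phrases and folder name are illustrative - review every change before rerunning):

```bash
# Find prompt files mentioning the first language, then swap the phrasing.
grep -rli 'python' java/ --include='*.md' \
  | xargs sed -i 's/write python to/write java to/g; s/Python programmer/Java programmer/g'
```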
# Metadata

1. Provide examples of metadata?
1. Provide metadata docs and instruction?
1. Ask for a patch to add snippet tags?
   - This didn't actually work - the patch rarely applies cleanly, and the locations aren't what I'd want.
# PLAN

- **Prepare** a precise prompt (by writing an aillyrc system prompt, providing supporting documents, and giving individual prompt steps).
- **Leverage** LLM models (by running Ailly on some or all parts of the context chain).
- **Assess** the generated content (as Ailly and the LLM write output, make sure it's on the right track).
- **Narrow** your context (by editing Ailly's generated content to keep the conversation going where you want it to).
---
combined: true
debug:
  engine: bedrock
  finish: end_turn
  id: null
  model: anthropic.claude-3-sonnet-20240229-v1:0
  usage: null
isolated: false
prompt: >
  Create a backronym using P L A N for iterating with Ailly. Be direct and
  professional, not poetic or whimsical. This backronym will summarize Ailly's
  project and workflow, so it needs to be approachable marketing copy. Do not
  include "Ailly" in any item. Prefer concrete actions, or nouns.
---

I have thought about these instructions.
---
combined: true
debug:
  engine: bedrock
  finish: end_turn
  id: null
  model: anthropic.claude-3-sonnet-20240229-v1:0
  usage: null
isolated: false
skip: true
prompt: Provide three alternatives for the P letter.
---

Here are three alternatives for the P in the P.L.A.N. backronym for iterating with Ailly:

1. Prepare a precise prompt
2. Pose a purposeful prompt
3. Propel your project with prompts

Prepare a purposeful prompt
---
combined: true
debug:
  engine: bedrock
  finish: end_turn
  id: null
  model: anthropic.claude-3-sonnet-20240229-v1:0
  usage: null
isolated: false
prompt: Provide three alternatives for the L letter.
---

Here are three alternatives for the L in the P.L.A.N. backronym:

1. Leverage language models
2. Let the model learn
3. Loop through iterations

Leverage language models