
Commit

Merge pull request #8 from manodyaSenevirathne/examples
Add Examples
NipunaRanasinghe authored Aug 15, 2024
2 parents 2e923c3 + 5e98aa8 commit bc7a075
Showing 12 changed files with 310 additions and 9 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -91,7 +91,8 @@ bal run

The `OpenAI Chat` connector provides practical examples illustrating usage in various scenarios. Explore these [examples](https://github.com/module-ballerinax-openai.chat/tree/main/examples/), covering the following use cases:

[//]: # (TODO: Add examples)
1. [CLI assistant](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/CLI-assistant) - Generate the appropriate command for the user's selected operating system from their task description and run it in the command-line interface.
2. [Image to markdown document converter](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/image-to-markdown-converter) - Generate detailed markdown documentation based on the image content.

## Build from the source

3 changes: 2 additions & 1 deletion ballerina/Module.md
@@ -82,4 +82,5 @@ bal run

The `OpenAI Chat` connector provides practical examples illustrating usage in various scenarios. Explore these [examples](https://github.com/module-ballerinax-openai.chat/tree/main/examples/), covering the following use cases:

[//]: # (TODO: Add examples)
1. [CLI assistant](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/CLI-assistant) - Generate the appropriate command for the user's selected operating system from their task description and run it in the command-line interface.
2. [Image to markdown document converter](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/image-to-markdown-converter) - Generate detailed markdown documentation based on the image content.
3 changes: 2 additions & 1 deletion ballerina/Package.md
@@ -82,4 +82,5 @@ bal run

The `OpenAI Chat` connector provides practical examples illustrating usage in various scenarios. Explore these [examples](https://github.com/module-ballerinax-openai.chat/tree/main/examples/), covering the following use cases:

[//]: # (TODO: Add examples)
1. [CLI assistant](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/CLI-assistant) - Generate the appropriate command for the user's selected operating system from their task description and run it in the command-line interface.
2. [Image to markdown document converter](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/image-to-markdown-converter) - Generate detailed markdown documentation based on the image content.
1 change: 1 addition & 0 deletions examples/CLI-assistant/.github/README.md
8 changes: 8 additions & 0 deletions examples/CLI-assistant/Ballerina.toml
@@ -0,0 +1,8 @@
[package]
org = "wso2"
name = "cli_assistant"
version = "0.1.0"
distribution = "2201.9.2"

[build-options]
observabilityIncluded = true
29 changes: 29 additions & 0 deletions examples/CLI-assistant/cli assistant.md
@@ -0,0 +1,29 @@
## Command line interface assistant

This CLI assistant is designed to simplify and automate terminal tasks for users who may not be familiar with command-line interfaces. By leveraging the OpenAI API v1's chat completion capabilities, the assistant helps users perform tasks by interpreting natural language descriptions and generating the appropriate commands based on their operating system.
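At its core, the assistant sends the selected operating system and the task description to the chat completion endpoint and asks for a single runnable command. Below is a condensed sketch of that call, mirroring the request shape used in this example's `main.bal`; the OS and task text shown here are placeholders:

```ballerina
import ballerina/io;
import ballerinax/openai.chat;

configurable string token = ?;

public function main() returns error? {
    chat:Client openAIChat = check new ({auth: {token}});

    // Ask for a single terminal command for the given OS and task (placeholder values).
    chat:CreateChatCompletionRequest request = {
        model: "gpt-4o-mini",
        messages: [
            {role: "system", content: "Generate a terminal command for linux to perform the following task. The response should be a direct command that can be run in the terminal without markdown."},
            {role: "user", content: "List the five largest files in the current directory"}
        ]
    };

    chat:CreateChatCompletionResponse response = check openAIChat->/chat/completions.post(request);
    io:println(response.choices[0].message.content ?: "No command generated.");
}
```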

## Prerequisites

1. Generate an API key as described in the [Setup guide](https://central.ballerina.io/ballerinax/openai.chat/latest#setup-guide).

2. For each example, create a `Config.toml` file with the related configuration. Here's an example of how your `Config.toml` file should look:

```toml
token = "<API Key>"
```

## Running an example

Execute the following commands to build an example from the source:

* To build an example:

```bash
bal build
```

* To run an example:

```bash
bal run
```
127 changes: 127 additions & 0 deletions examples/CLI-assistant/main.bal
@@ -0,0 +1,127 @@
// Copyright (c) 2024, WSO2 LLC. (http://www.wso2.com).
//
// WSO2 LLC. licenses this file to you under the Apache License,
// Version 2.0 (the "License"); you may not use this file except
// in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

import ballerina/io;
import ballerina/os;
import ballerinax/openai.chat;

configurable string token = ?;
final chat:Client openAIChat = check new ({auth: {token}});

public function main() returns error? {
string osType = getOSType();
string taskDescription = getTaskDescription();
string|error command = generateCLICommand(osType, taskDescription);

// if the command generation was unsuccessful, terminate the program
if command is error {
io:println(command);
return;
}
boolean shouldExecute = confirmExecution(command);

if shouldExecute {
io:println("Generated command: " + command);
check executeCommand(command, osType);
return;
}
io:println("Command execution aborted.");
}

function getOSType() returns string {
io:println("Select your operating system by pressing the corresponding number:\n1. Windows\n2. Linux\n3. macOS");
string osType;
string userInput = io:readln().trim();

match userInput {
"1" => {
osType = "windows";
}
"2" => {
osType = "linux";
}
"3" => {
osType = "mac";
}
_ => {
io:println("Invalid selection. Defaulting to 'linux'.");
osType = "linux";
}
}
return osType;
}

function getTaskDescription() returns string {
io:println("Please describe the task you want to perform:");
string taskDescription = io:readln().trim();
return taskDescription;
}

// Asks the chat completion API for a single terminal command that performs the given task on the selected OS.
function generateCLICommand(string osType, string taskDescription) returns string|error {
chat:CreateChatCompletionRequest request = {
model: "gpt-4o-mini",
messages: [
{role: "system", content: string `Generate a terminal command for ${osType} to perform the following task. The response should be a direct command that can be run in the terminal without markdown.`},
{role: "user", content: taskDescription}
]
};

chat:CreateChatCompletionResponse response = check openAIChat->/chat/completions.post(request);

string? command = response.choices[0].message.content;
if command is () {
return error("Failed to generate a valid command.");
}
return command;
}

function confirmExecution(string command) returns boolean {
io:println("Generated command: " + command);
io:println("Do you want to execute this command? (y/n)");
string? userInput = io:readln().trim().toLowerAscii();
if userInput == "y" {
return true;
}
return false;
}

function executeCommand(string command, string osType) returns error? {
io:println("Executing command...");

string[] cmdArray;
if (osType == "windows") {
// Use PowerShell to execute the command on Windows
cmdArray = ["powershell", "-Command", command];
} else {
// For Linux/macOS, use /bin/sh to execute the command
cmdArray = ["/bin/sh", "-c", command];
}

os:Process exec = check os:exec({value: cmdArray[0], arguments: cmdArray.slice(1)});

int status = check exec.waitForExit();
if (status != 0) {
io:println(string `Process exited with status: ${status}. The command may have failed.`);
} else {
io:println("Process executed successfully.");
}

byte[] output = check exec.output(io:stdout);
if output.length() > 0 {
io:println("Output:");
io:print(check string:fromBytes(output));
}
}
17 changes: 11 additions & 6 deletions examples/README.md
@@ -2,15 +2,20 @@

The `ballerinax/openai.chat` connector provides practical examples illustrating usage in various scenarios. Explore these [examples](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples), covering the following use cases:

[//]: # (TODO: Add examples)
1.
2.
1. [CLI assistant](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/CLI-assistant) - Generate the appropriate command for the user's selected operating system from their task description and run it in the command-line interface.
2. [Image to markdown document converter](https://github.com/ballerina-platform/module-ballerinax-openai.chat/tree/main/examples/image-to-markdown-converter) - Generate detailed markdown documentation based on the image content.

## Prerequisites

[//]: # (TODO: Add prerequisites)
1. Generate an API key as described in the [Setup guide](https://central.ballerina.io/ballerinax/openai.chat/latest#setup-guide).

## Running an Example
2. For each example, create a `Config.toml` file with the related configuration. Here's an example of how your `Config.toml` file should look (a sketch of how the examples read this value follows the list):

```toml
token = "<API Key>"
```
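
Each example reads this `token` value as a Ballerina configurable and uses it to create the OpenAI chat client. Here is a minimal sketch of that setup, mirroring the examples' `main.bal` files:

```ballerina
import ballerina/io;
import ballerinax/openai.chat;

// Populated from Config.toml at startup.
configurable string token = ?;

// A single client instance authenticated with the API key.
final chat:Client openAIChat = check new ({auth: {token}});

public function main() {
    io:println("OpenAI chat client ready.");
}
```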

## Running an example

Execute the following commands to build an example from the source:

@@ -26,7 +31,7 @@ Execute the following commands to build an example from the source:
bal run
```

## Building the Examples with the Local Module
## Building the examples with the local module

**Warning**: Due to the absence of support for reading local repositories for single Ballerina files, the Bala of the module is manually written to the central repository as a workaround. Consequently, the bash script may modify your local Ballerina repositories.

1 change: 1 addition & 0 deletions examples/image-to-markdown-converter/.github/README.md
8 changes: 8 additions & 0 deletions examples/image-to-markdown-converter/Ballerina.toml
@@ -0,0 +1,8 @@
[package]
org = "wso2"
name = "image_to_markdown_converter"
version = "0.1.0"
distribution = "2201.9.2"

[build-options]
observabilityIncluded = true
@@ -0,0 +1,29 @@
## Image to markdown document converter

This use case demonstrates how the OpenAI API v1 can be used to generate markdown documentation from visual content. It starts by converting an image file, such as a whiteboard sketch or diagram, into a base64-encoded string. Using the OpenAI API v1's vision capabilities, it then generates a markdown document that includes detailed descriptions of the image's contents, such as diagrams, notes, or code snippets. The resulting markdown is structured with appropriate headings and summaries and saved as a `.md` file in the same directory as the original image.
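
The key step is packaging the image as a base64 data URL inside a vision-style chat message. Below is a condensed sketch of that flow, mirroring the request shape used in this example's `main.bal`; the image path and prompt are placeholders:

```ballerina
import ballerina/io;
import ballerinax/openai.chat;

configurable string token = ?;

public function main() returns error? {
    chat:Client openAIChat = check new ({auth: {token}});

    // Read the image and encode it as base64 (the path is a placeholder).
    byte[] imageBytes = check io:fileReadBytes("./diagram.jpg");
    string base64Image = imageBytes.toBase64();

    // Send a text prompt plus the image as a data URL to the vision-capable model.
    chat:CreateChatCompletionRequest request = {
        model: "gpt-4o-mini",
        messages: [
            {
                role: "user",
                content: [
                    {"type": "text", "text": "Generate markdown documentation for this image."},
                    {"type": "image_url", "image_url": {"url": string `data:image/jpeg;base64,${base64Image}`}}
                ]
            }
        ],
        max_tokens: 300
    };

    chat:CreateChatCompletionResponse response = check openAIChat->/chat/completions.post(request);
    io:println(response.choices[0].message.content ?: "No documentation generated.");
}
```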

## Prerequisites

1. Generate an API key as described in the [Setup guide](https://central.ballerina.io/ballerinax/openai.chat/latest#setup-guide).

2. For each example, create a `Config.toml` file with the related configuration. Here's an example of how your `Config.toml` file should look:

```toml
token = "<API Key>"
```

## Running an example

Execute the following commands to build an example from the source:

* To build an example:

```bash
bal build
```

* To run an example:

```bash
bal run
```
90 changes: 90 additions & 0 deletions examples/image-to-markdown-converter/main.bal
@@ -0,0 +1,90 @@
// Copyright (c) 2024, WSO2 LLC. (http://www.wso2.com).
//
// WSO2 LLC. licenses this file to you under the Apache License,
// Version 2.0 (the "License"); you may not use this file except
// in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

import ballerina/file;
import ballerina/io;
import ballerinax/openai.chat;

configurable string token = ?;
final chat:Client openAIChat = check new ({auth: {token}});

public function main() returns error? {
string imagePath = getImageFilePath();
string|error base64Image = encodeImageToBase64(imagePath);

if base64Image is error {
io:println(base64Image);
return;
}
string|error markdownDoc = generateDocumentation(base64Image);

if markdownDoc is error {
io:println(markdownDoc);
return;
}
check saveMarkdownToFile(markdownDoc, imagePath);
io:println("Markdown documentation generated and saved successfully.");
}

function getImageFilePath() returns string {
io:println("Enter the path to the image file:");
return io:readln().trim();
}

function encodeImageToBase64(string imagePath) returns string|error {
byte[] imageBytes = check io:fileReadBytes(imagePath);
return imageBytes.toBase64();
}

// Sends the base64-encoded image to the chat completion API and returns the generated markdown documentation.
function generateDocumentation(string base64Image) returns string|error {
string prompt = "Generate markdown documentation based on the content of the following image. Include detailed descriptions of any diagrams, notes, or code snippets present. Structure the documentation with appropriate headings, and provide a summary of the key concepts discussed. Additionally, include any relevant annotations or comments that might aid in understanding the content";

chat:CreateChatCompletionRequest request = {
model: "gpt-4o-mini",
messages: [
{
role: "user",
content: [
{
"type": "text",
"text": prompt
},
{
"type": "image_url",
"image_url": {
"url": string `data:image/jpeg;base64,${base64Image}`
}
}
]
}
],
max_tokens: 300
};
chat:CreateChatCompletionResponse response = check openAIChat->/chat/completions.post(request);
string? markdownDoc = response.choices[0].message.content;

if markdownDoc is () {
return error("Failed to generate markdown documentation.");
}
return markdownDoc;
}

// Writes the generated markdown next to the original image, naming it after the image with a "_documentation.md" suffix.
function saveMarkdownToFile(string markdownDoc, string imagePath) returns error? {
string imageName = check file:basename(imagePath);
string parentPath = check file:parentPath(imagePath);
string docName = string `${re `\.`.split(imageName)[0]}_documentation.md`;
check io:fileWriteBytes(parentPath + "/" + docName, markdownDoc.toBytes());
}
