Commit

updated README and zzz
CorradoLanera committed Apr 21, 2024
1 parent 69701c1 commit 589c383
Showing 16 changed files with 584 additions and 126 deletions.
1 change: 1 addition & 0 deletions NAMESPACE
@@ -4,6 +4,7 @@ export(compose_prompt)
export(compose_prompt_api)
export(compose_prompt_system)
export(compose_prompt_user)
export(create_usr_data_prompter)
export(get_completion_from_messages)
export(get_content)
export(get_tokens)
2 changes: 2 additions & 0 deletions NEWS.md
@@ -1,4 +1,6 @@
# ubep.gpt (development version)

* Added zzz.R with startup messages checking for API keys.
* Updated README with examples of usage.
* Setup development environment.
* Initial setup from CorradoLanera/gpt-template.
62 changes: 54 additions & 8 deletions R/compose_prompt.R
@@ -1,11 +1,11 @@
#' Create a prompt to ChatGPT
#'
#' Questa funzione è un semplice wrapper per comporre un buon prompt per
#' ChatGPT. L'output non è altro che la giustapposizione su righe separate delle
#' varie componenti (con il testo addizionale racchiuso tra i delimitatori in
#' fondo al prompt). Dunque il suo utilizzo è più che altro focalizzato è utile
#' per ricordare e prendere l'abitudine di inserire le componenti utili per un
#' buon prompt.
#' This function is a simple wrapper to compose a good prompt for
#' ChatGPT. The output is simply the juxtaposition, on separate lines,
#' of the various components (with the additional text enclosed between
#' the delimiters at the bottom of the prompt). Its use is therefore
#' mainly intended to help you remember, and get into the habit of
#' including, the components of a good prompt.
#'
#' @param role (chr) The role that ChatGPT should play
#' @param context (chr) The context behind the task required
@@ -15,8 +15,8 @@
#' @param style (chr) The style ChatGPT should use in the output
#' @param examples (chr) Some examples of correct output
#' @param text (chr) Additional text to embed in the prompt
#' @param delimiter (chr) delimiters for the `text` to embed, a sequence of
#' three identical symbols is suggested
#' @param delimiter (chr) delimiters for the `text` to embed, a sequence
#' of three identical symbols is suggested
#'
#' @return (chr) the glue of all the prompts components
#' @export
@@ -116,6 +116,52 @@ compose_prompt_user <- function(
}


#' Create a function to prompt the user for data
#'
#' This function creates a function that can be used to prompt the user
#' for data in a specific context. Given the context of interest, the
#' created function will accept a string of text as input and return
#' the complete prompt based on the desired context.
#'
#' @param task (chr) The task ChatGPT should assess
#' @param instructions (chr) Description of steps ChatGPT should follow
#' @param output (chr) The type/kind of output required
#' @param style (chr) The style ChatGPT should use in the output
#' @param examples (chr) Some examples of correct output
#'
#' @return (function) a function that can be used to prompt the user,
#' accepting a string of text as input and returning the complete
#' prompt based on the desired context.
#'
#' @export
#'
#' @examples
#' prompter <- create_usr_data_prompter(
#' task = "Your task is to extract information from a text provided.",
#' instructions = "
#' You should extract the first and last words of the text.",
#' output = "
#' Return the first and last words of the text separated by a dash,
#' i.e., `first - last`.",
#' style = "
#' Do not add any additional information, return only the requested
#' information.",
#' examples = "
#' text: 'This is an example text.'
#' output: 'This - text'
#' text: 'Another example text!!!'
#' output: 'Another - text'"
#' )
#' prompter("This is an example text.")
#' prompter("Another example text!!!")
#'
#' # You can also use it with a data frame to programmatically create
#' # prompts for each row of a data frame's column.
#' db <- data.frame(
#' text = c("This is an example text.", "Another example text!!!")
#' )
#' db$text |> purrr::map_chr(prompter)
#'
create_usr_data_prompter <- function(
task = "", instructions = "", output = "", style = "", examples = ""
) {
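The body of `create_usr_data_prompter` is elided in the hunk above. As a rough sketch of the underlying pattern only (an assumption, not the package's actual implementation — the name `create_prompter_sketch` is hypothetical), such a factory can capture the fixed prompt components in a closure and fill in just the text at call time:

```r
# Hypothetical sketch of a prompter factory (NOT the package's elided
# body): the fixed components are captured once, and the returned
# function only injects the per-call text between delimiters.
create_prompter_sketch <- function(
    task = "", instructions = "", output = "", style = "", examples = ""
) {
  function(text) {
    paste(task, instructions, output, style, examples,
          paste0('"""', text, '"""'),
          sep = "\n")
  }
}

p <- create_prompter_sketch(task = "Extract the first word.")
cat(p("Hello world"))
```

This closure shape is what makes `db$text |> purrr::map_chr(prompter)` in the example above work: the same template is reused across rows with only `text` varying.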
28 changes: 21 additions & 7 deletions R/get_completion_from_messages.R
@@ -12,6 +12,7 @@
#' @param max_tokens (dbl, default = 1000) a value greater than 0. The
#' maximum number of tokens to generate in the chat completion. (see:
#' https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens)
#' @param quiet (lgl, default = FALSE) whether to suppress messages
#'
#' @details For argument description, please refer to the [official
#' documentation](https://platform.openai.com/docs/api-reference/chat/create).
@@ -76,7 +77,8 @@ get_completion_from_messages <- function(
messages,
model = c("gpt-3.5-turbo", "gpt-4-turbo"),
temperature = 0,
max_tokens = 1000
max_tokens = 1000,
quiet = FALSE
) {

model <- match.arg(model)
@@ -85,7 +87,14 @@
"gpt-4-turbo" = "gpt-4-1106-preview"
)

res <- openai::create_chat_completion(
get_chat_completion <- if (quiet) {
\(...) openai::create_chat_completion(...) |>
suppressMessages()
} else {
openai::create_chat_completion
}

res <- get_chat_completion(
model = model,
messages = messages,
temperature = temperature,
@@ -114,17 +123,22 @@ get_content <- function(completion) {
#'
#' @param completion the result returned by a
#' `get_completion_from_messages` call
#' @param what (chr) one of "total" (default), "prompt", or "completion"
#' @param what (chr) one of "total" (default), "prompt", "completion",
#' or "all"
#' @describeIn get_completion_from_messages
#'
#' @return (int) number of token used in completion for prompt or completion part, or overall (total)
#' @return (int) number of tokens used in the completion for the
#' prompt or completion part, or overall (total)
#' @export
get_tokens <- function(
completion,
what = c("total", "prompt", "completion")
what = c("total", "prompt", "completion", "all")
) {
what <- match.arg(what)
sel <- paste0(what, "_tokens")

completion[["tokens"]][[sel]]
if (what == "all") {
completion[["tokens"]] |> unlist()
} else {
completion[["tokens"]][[paste0(what, "_tokens")]]
}
}
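The conditional quiet-wrapper introduced in `get_completion_from_messages` above can be sketched in isolation. The helper names below (`make_quiet`, `noisy_double`) are illustrative, not part of the package, and the lambda syntax requires R >= 4.1:

```r
# Sketch of the quiet-wrapper pattern used above: wrap any function so
# that its messages are suppressed when quiet = TRUE, and left
# untouched otherwise.
make_quiet <- function(f, quiet = FALSE) {
  if (quiet) {
    \(...) suppressMessages(f(...))
  } else {
    f
  }
}

noisy_double <- function(x) {
  message("doubling...")
  x * 2
}

make_quiet(noisy_double, quiet = TRUE)(21)  # returns 42, no message
```

Choosing the wrapped or unwrapped function once, before the call, keeps the actual `create_chat_completion` invocation identical in both branches.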
3 changes: 2 additions & 1 deletion R/query_gpt.R
@@ -58,7 +58,8 @@ query_gpt <- function(
get_completion_from_messages(
model = model,
temperature = temperature,
max_tokens = max_tokens
max_tokens = max_tokens,
quiet = quiet
)
done <- TRUE
aux
39 changes: 39 additions & 0 deletions R/zzz.R
@@ -1,3 +1,42 @@
.onAttach <- function(...) {
usethis::ui_info("Welcome to ubep.gpt!")

if (Sys.getenv("OPENAI_API_KEY") == "") {
usethis::ui_info("OPENAI_API_KEY environment variable is not set.")
usethis::ui_info(
"It is required to use OpenAI APIs with `ubep.gpt`.")
usethis::ui_info(
"To set the OPENAI_API_KEY environment variable,
you can call {usethis::ui_code('usethis::edit_r_environ(\"project\")')},
and add the line {usethis::ui_code('OPENAI_API_KEY=<your_api_key>')}."
)
usethis::ui_info(
"REMINDER:
Never share your API key with others.
Keep it safe and secure.
If you need an API key, you can generate it on the OpenAI API website
(https://platform.openai.com/api-keys).
Remember to assign it to the correct project
(i.e., NOT to the 'default' one).
If you need to be added to the organization and/or to a project,
please contact your project's referent."
)

usethis::ui_todo(
"Please set the OPENAI_API_KEY environment variable with your OpenAI API key."
)
usethis::ui_todo("And then, restart your R session.")
} else {
usethis::ui_info("The OPENAI_API_KEY environment variable is set.")
usethis::ui_info("You are ready to use the package `ubep.gpt`.")
usethis::ui_todo("Just double-check that the key is the correct one.")
usethis::ui_info(
"REMINDER: Never share your API key with others.
Keep it safe and secure.
If you think your API key has been compromised,
you can regenerate it on the OpenAI API website
(https://platform.openai.com/api-keys)."
)
usethis::ui_done("Enjoy the package!")
}
}
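The startup check above only looks for the key in the environment. One way to provide it for a single session is sketched below; the key value shown is a placeholder, and the persistent alternative (editing `.Renviron`) is the one the startup message itself recommends:

```r
# Set the API key for the current session only (placeholder value).
# For a persistent, per-project setup, instead put the line
#   OPENAI_API_KEY=<your_api_key>
# in the project .Renviron, e.g. via usethis::edit_r_environ("project"),
# and restart the R session.
Sys.setenv(OPENAI_API_KEY = "sk-your-key-here")
Sys.getenv("OPENAI_API_KEY") != ""  # TRUE once the key is set
```

Session-level `Sys.setenv` is handy for quick tests, but it is lost on restart, which is why `.onAttach` nudges users toward `.Renviron`.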
141 changes: 117 additions & 24 deletions README.Rmd
@@ -21,49 +21,118 @@ knitr::opts_chunk$set(
[![R-CMD-check](https://github.com/UBESP-DCTV/ubep.gpt/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/UBESP-DCTV/ubep.gpt/actions/workflows/R-CMD-check.yaml)
<!-- badges: end -->

The goal of ubep.gpt is to ...
The goal of `{ubep.gpt}` is to provide a simple interface to OpenAI's GPT API. The package is designed to work with dataframes/tibbles and to simplify the process of querying the API.

## Installation

You can install the development version of ubep.gpt like so:
You can install the development version of `{ubep.gpt}` like so:

``` r
# FILL THIS IN! HOW CAN PEOPLE INSTALL YOUR DEV PACKAGE?
remotes::install_github("UBESP-DCTV/ubep.gpt")
```

## Example
## Basic example

This is a basic example which shows you how to solve a common problem:
You can use the `query_gpt` function to query the GPT API. You can choose
between the GPT-3.5-turbo and GPT-4-turbo models. This function is useful mainly because it retries the query a given number of times (10 by default) in case of error (often caused by server overload).

```{r example}
To use the function you need to compose a prompt. You can use the `compose_prompt_api` function to do so. This function is useful because it composes the prompt automatically, adopting the structure required by the API.

Once you have queried the API, you can extract the content of the response using the `get_content` function. You can also extract the number of tokens used by the prompt and the response using the `get_tokens` function.

```{r}
library(ubep.gpt)
prompt <- compose_prompt_api(
sys_msg = "You are the assistant of a university professor.",
usr_msg = "Tell me about the last course you provided."
)
prompt
res <- query_gpt(
prompt = prompt,
model = "gpt-3.5-turbo",
quiet = FALSE, # default TRUE
max_try = 2, # default 10
temperature = 1.5, # default 0 [0-2]
max_tokens = 100 # default 1000
)
str(res)
get_content(res)
get_tokens(res)
get_tokens(res, "prompt")
get_tokens(res, "all")
```

## Easy prompt-assisted creation

You can use the `compose_prompt_system` and `compose_prompt_user` functions to create the system and user prompts, respectively. These functions are useful because they help you compose the prompts following prompt-writing best practices: the arguments are just the main components every prompt should have, and the functions simply juxtapose those components in the right order.

```{r}
sys_prompt <- compose_prompt_system(
role = "You are the assistant of a university professor.",
context = "You are analyzing the comments of the students of the last course."
)
sys_prompt
usr_prompt <- compose_prompt_user(
task = "Your task is to extract information from a text provided.",
instructions = "You should extract the first and last words of the text.",
output = "Return the first and last words of the text separated by a dash, i.e., `first - last`.",
style = "Do not add any additional information, return only the requested information.",
examples = "
# Examples:
text: 'This is an example text.'
output: 'This - text'
text: 'Another example text!!!'
output: 'Another - text'",
text = "Nel mezzo del cammin di nostra vita mi ritrovai per una selva oscura"
)
usr_prompt
compose_prompt_api(sys_prompt, usr_prompt) |>
query_gpt() |>
get_content()
```

## Querying a column of a dataframe

You can use the `query_gpt_on_column` function to query the GPT API on a column of a dataframe. This function is useful because it iterates the query over each row of the column and composes the prompt automatically, adopting the structure required by the API. In this case, you provide the components of the prompt template, plus the name of the column you want to embed in the template as the "text" to query. All the prompt components are optional, so you can provide only the ones you need: `role` and `context` compose the system prompt, while `task`, `instructions`, `output`, `style`, and `examples` compose the user prompt (they are simply juxtaposed in the right order).

```{r example}
db <- data.frame(
commenti = c(
"Che barba, che noia!",
"Un po' noioso, ma interessante",
"Che bello, mi è piaciuto molto!"
txt = c(
"I'm very satisfied with the course; it was very interesting and useful.",
"I didn't like it at all; it was deadly boring.",
"The best course I've ever attended.",
"The course was a waste of time.",
"blah blah blah",
"woow",
"bim bum bam"
)
)
role <- "Sei l'assistente di un docente universitario."
context <- "State analizzando i commenti degli studenti dell'ultimo corso."
# system
role <- "You are the assistant of a university professor."
context <- "You are analyzing the comments of the students of the last course."
task <- "Il tuo compito è capire se sono soddisfatti del corso."
instructions <- "Analizza i commenti e decidi se sono soddisfatti o meno."
output <- "Riporta 'soddisfatto' o 'insoddisfatto', in caso di dubbio o impossibilità riporta 'NA'."
style <- "Non aggiungere nessun commento, restituisci solo ed esclusivamente una delle classificazioni possibile."
# user
task <- "Your task is to understand if they are satisfied with the course."
instructions <- "Analyze the comments and decide if they are satisfied or not."
output <- "Report 'satisfied' or 'unsatisfied', in case of doubt or impossibility report 'NA'."
style <- "Do not add any comment, return only and exclusively one of the possible classifications."
examples <- "
commento_1: 'Mi è piaciuto molto il corso; davvero interessante.'
classificazione_1: 'soddisfatto'
commento_2: 'Non mi è piaciuto per niente; una noia mortale'
classificazione_2: 'insoddisfatto'
"
# Examples:
text: 'I'm very satisfied with the course; it was very interesting and useful.'
output: 'satisfied'
text: 'I didn't like it at all; it was deadly boring.'
output: 'unsatisfied'"
res <- db |>
db |>
query_gpt_on_column(
"commenti",
"txt",
role = role,
context = context,
task = task,
@@ -72,9 +141,33 @@ res <- db |>
style = style,
examples = examples
)
res
```

## Base ChatGPT prompt creation (NOT for API)

You can use the `compose_prompt` function to create a prompt for ChatGPT. This function is useful because it helps you compose the prompt following prompt-writing best practices: the arguments are just the main components every prompt should have, and the function simply juxtaposes them in the right order. The result is suitable to be copy-pasted into ChatGPT, not to be used with API calls, i.e., it cannot be used with the `query_gpt` function!

```{r}
compose_prompt(
role = "You are the assistant of a university professor.",
context = "You are analyzing the comments of the students of the last course.",
task = "Your task is to extract information from a text provided.",
instructions = "You should extract the first and last words of the text.",
output = "Return the first and last words of the text separated by a dash, i.e., `first - last`.",
style = "Do not add any additional information, return only the requested information.",
examples = "
# Examples:
text: 'This is an example text.'
output: 'This - text'
text: 'Another example text!!!'
output: 'Another - text'",
text = "Nel mezzo del cammin di nostra vita mi ritrovai per una selva oscura"
)
```

![https://chat.openai.com/share/394a008b-d463-42dc-9361-1bd745bcad6d](dev/img/gpt-example.png)

## Code of Conduct

Please note that the ubep.gpt project is released with a [Contributor Code of Conduct](https://contributor-covenant.org/version/2/1/CODE_OF_CONDUCT.html). By contributing to this project, you agree to abide by its terms.