diff --git a/DESCRIPTION b/DESCRIPTION
index 46604b7..d042c28 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,13 +1,10 @@
 Package: ubep.gpt
-Title: A basic/simple interface to OpenAI’s GPT API
+Title: A basic and simple interface to OpenAI’s GPT API
 Version: 0.2.9
 Authors@R: 
     person("Corrado", "Lanera", , "corrado.lanera@ubep.unipd.it", role = c("aut", "cre"),
            comment = c(ORCID = "0000-0002-0520-7428"))
-Description: The goal of '{ubep.gpt}' is to provide a basic/simple
-    interface to OpenAI's GPT API. The package is designed to work with
-    (i.e., to query on) dataframes/tibbles, and to simplify the process of
-    querying the API.
+Description: The goal of 'ubep.gpt' is to provide a basic and simple interface to OpenAI's GPT API (and other compatible APIs). The package is also designed to work with (i.e., to query on) dataframes/tibbles, and to simplify the process of querying the API.
 License: MIT + file LICENSE
 URL: https://github.com/UBESP-DCTV/ubep.gpt,
     https://ubesp-dctv.github.io/ubep.gpt/
diff --git a/README.Rmd b/README.Rmd
index b7188ea..b291fc9 100644
--- a/README.Rmd
+++ b/README.Rmd
@@ -37,7 +37,7 @@ You can use the `query_gpt` function to query the GPT API. You can decide the mo
 
 To use the function you need to compose a prompt. You can use (but it is not necessary!) the `compose_prompt_api` function to compose the prompt properly with an optional (single) system prompt (i.e., gpt's setup) and a (single) user prompt (i.e., the query). This function is useful because it helps you to compose the prompt automatically adopting the required API's structure.
 
-> NOTE: you can still pass a correcly formatted list (of lists) as described in the [official documentation](https://platform.openai.com/docs/api-reference/chat) (<https://platform.openai.com/docs/api-reference/chat>).
+> NOTE: you can still pass a correctly formatted list (of lists) as described in the [official documentation](https://platform.openai.com/docs/api-reference/chat) (<https://platform.openai.com/docs/api-reference/chat>).
 
 Once you have queried the API, you can extract the content of the response using the `get_content` function. You can also extract the tokens of the prompt and the response using the `get_tokens` function.
 
@@ -255,7 +255,7 @@ cat(res) # limited to 30 tokens!
 
 ### Python's backend
 
-Often, for complex prompt it happens that the R environment (everyone we have experiemnted, i.e. `{openai}`, `{httr}`, `{httr2}`, and `curl`) return a timeout error for a certificate validation (see, e.g.: https://github.com/irudnyts/openai/issues/61, and https://github.com/irudnyts/openai/issues/42). The same does not happen with a pure python backend usign the official OpenAI's `{openai}` library. you can setup a Python backend by executing `setup_py()`, and setting `use_py = TRUE` in the functions that send the queries (i.e., `query_gpt`, `query_gpt_on_column`, and `get_completion_from_messages`)
+Often, for complex prompts, the R environment (all the clients we have experimented with, i.e., `{openai}`, `{httr}`, `{httr2}`, and `curl`) returns a timeout error during certificate validation (see, e.g.: https://github.com/irudnyts/openai/issues/61, and https://github.com/irudnyts/openai/issues/42). The same does not happen with a pure Python backend using the official OpenAI `{openai}` library. You can set up a Python backend by executing `setup_py()`, and setting `use_py = TRUE` in the functions that send the queries (i.e., `query_gpt`, `query_gpt_on_column`, and `get_completion_from_messages`).
 
 > NOTE: using a Python backend can be a little slower, but sometimes necessary.
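For concreteness, here is a minimal sketch of the Python-backend flow the hunk above describes (a sketch only, not part of the patch: `setup_py()`, `query_gpt`, `use_py`, `get_content`, and `get_tokens` are named in the README text, while the `prompt`, `sys_prompt`, and `usr_prompt` argument names and the prompt texts are assumptions based on the README's description of `compose_prompt_api`):

```r
# Minimal sketch of the Python-backend flow described above; not part of
# the patch. `setup_py()`, `query_gpt()`, `get_content()`, `get_tokens()`,
# and the `use_py` flag are named in the README; the `prompt` argument
# name and the `sys_prompt`/`usr_prompt` names are assumptions.
library(ubep.gpt)

setup_py() # one-time: provision the Python (official openai) backend

prompt <- compose_prompt_api(
  sys_prompt = "You are the assistant of a university professor.",
  usr_prompt = "Tell me about the last course the professor provided."
)

res <- query_gpt(
  prompt = prompt,
  model = "gpt-3.5-turbo",
  use_py = TRUE # route the request through the Python backend
)

get_content(res) |> cat()
get_tokens(res, "all")
```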
@@ -274,11 +274,11 @@ cat(res)
 
 ### Personalized server's endpoint
 
-If you have a personal server asking for queries using the OpenAI's API format, (e.g. using StudioLM, with open source models), you can set the endpoint to POST the query on your server instead of the OpenaAI one.
+If you have a personal server accepting queries in the OpenAI API format (e.g., running LM Studio with open-source models), you can set the endpoint to POST the query to your server instead of the OpenAI one.
 
-> NOTE: when using personalized server endpoint, you can select the model you would like to use as usual by the `model` option. Clearly, avalilable models depend on your local server configuration.
+> NOTE: when using a personalized server endpoint, you can select the model you would like to use, as usual, via the `model` option. Clearly, the available models depend on your local server configuration.
 
-> WARNIGN: this option cannot be select if Python backend is request (i.e., setting `use_py = TRUE`, and a custom `endpoint` won't work)!
+> WARNING: this option cannot be selected if the Python backend is requested (i.e., setting `use_py = TRUE` with a custom `endpoint` won't work)!
 
 ```{r}
 if (FALSE) { # we do not run this in the README
diff --git a/README.md b/README.md
index a7da30b..43c68a6 100644
--- a/README.md
+++ b/README.md
@@ -39,7 +39,7 @@ a (single) user prompt (i.e., the query). This function is useful
 because it helps you to compose the prompt automatically adopting the
 required API's structure.
 
-> NOTE: you can still pass a correcly formatted list (of lists) as
+> NOTE: you can still pass a correctly formatted list (of lists) as
 > described in the [official
 > documentation](https://platform.openai.com/docs/api-reference/chat)
 > (<https://platform.openai.com/docs/api-reference/chat>).
@@ -95,40 +95,40 @@ res <- query_gpt(
 )
 #> ℹ Total tries: 1.
 #> ℹ Prompt token used: 29.
-#> ℹ Response token used: 83.
-#> ℹ Total token used: 112.
+#> ℹ Response token used: 100.
+#> ℹ Total token used: 129.
 str(res)
 #> List of 7
-#>  $ id                : chr "chatcmpl-9PpAnZZwHo5hbUew4uUzbm5wsiMG9"
+#>  $ id                : chr "chatcmpl-9PzusvOFD57oVSnHWcp3JKk9Kr75U"
 #>  $ object            : chr "chat.completion"
-#>  $ created           : int 1715941937
+#>  $ created           : int 1715983234
 #>  $ model             : chr "gpt-3.5-turbo-0125"
 #>  $ choices           :'data.frame':  1 obs. of  5 variables:
 #>   ..$ index          : int 0
 #>   ..$ logprobs       : logi NA
-#>   ..$ finish_reason  : chr "stop"
+#>   ..$ finish_reason  : chr "length"
 #>   ..$ message.role   : chr "assistant"
-#>   ..$ message.content: chr "I supported Professor Smith with his advanced mathematics course last semester. The course covered topics such "| __truncated__
+#>   ..$ message.content: chr "The last course that our professor provided was on Advanced Machine Learning. This course delved into more comp"| __truncated__
 #>  $ usage             :List of 3
 #>   ..$ prompt_tokens    : int 29
-#>   ..$ completion_tokens: int 83
-#>   ..$ total_tokens     : int 112
+#>   ..$ completion_tokens: int 100
+#>   ..$ total_tokens     : int 129
 #>  $ system_fingerprint: NULL
 get_content(res)
-#> [1] "I supported Professor Smith with his advanced mathematics course last semester. The course covered topics such as linear algebra, differential equations, and complex analysis. I helped Professor Smith develop homework assignments, review materials, and conduct study sessions to assist students in understanding the challenging concepts in the course. Overall, the course was a great success, and the students were able to delve deep into these complex mathematical topics under Professor Smith's guidance."
+#> [1] "The last course that our professor provided was on Advanced Machine Learning. This course delved into more complex machine learning techniques such as deep learning, reinforcement learning, and unsupervised learning. Students learned about cutting-edge algorithms and applications in areas such as natural language processing, computer vision, and recommendation systems. The course also had a significant practical component with coding assignments and a final project where students applied their knowledge to analyze real-world datasets. It was a challenging yet engaging course that provided students with valuable skills for" # for a well formatted output on R, use `cat()` get_content(res) |> cat() -#> I supported Professor Smith with his advanced mathematics course last semester. The course covered topics such as linear algebra, differential equations, and complex analysis. I helped Professor Smith develop homework assignments, review materials, and conduct study sessions to assist students in understanding the challenging concepts in the course. Overall, the course was a great success, and the students were able to delve deep into these complex mathematical topics under Professor Smith's guidance. +#> The last course that our professor provided was on Advanced Machine Learning. This course delved into more complex machine learning techniques such as deep learning, reinforcement learning, and unsupervised learning. Students learned about cutting-edge algorithms and applications in areas such as natural language processing, computer vision, and recommendation systems. The course also had a significant practical component with coding assignments and a final project where students applied their knowledge to analyze real-world datasets. It was a challenging yet engaging course that provided students with valuable skills for get_tokens(res) -#> [1] 112 +#> [1] 129 get_tokens(res, "prompt") #> [1] 29 get_tokens(res, "all") #> prompt_tokens completion_tokens total_tokens -#> 29 83 112 +#> 29 100 129 ``` ## Easy prompt-assisted creation @@ -425,11 +425,11 @@ cat(res) # limited to 30 tokens! ### Python’s backend Often, for complex prompt it happens that the R environment (everyone we -have experiemnted, i.e. `{openai}`, `{httr}`, `{httr2}`, and `curl`) +have experimented, i.e. `{openai}`, `{httr}`, `{httr2}`, and `curl`) return a timeout error for a certificate validation (see, e.g.: , and ). The same does not -happen with a pure python backend usign the official OpenAI’s `{openai}` +happen with a pure python backend using the official OpenAI’s `{openai}` library. you can setup a Python backend by executing `setup_py()`, and setting `use_py = TRUE` in the functions that send the queries (i.e., `query_gpt`, `query_gpt_on_column`, and `get_completion_from_messages`) @@ -448,20 +448,20 @@ res <- query_gpt( get_content() cat(res) -#> The last course provided by the professor was a graduate-level seminar on "Advanced Topics in Artificial Intelligence." The course covered cutting-edge research in areas such as deep learning, natural language processing, and reinforcement learning. Students were required to read and present research papers, participate in discussions, and complete a final project applying the concepts learned in the course. The professor received positive feedback from students for their engaging teaching style and ability to explain complex topics clearly. +#> The last course that the professor provided was a graduate-level seminar on "Advanced Topics in Artificial Intelligence." 
The course covered cutting-edge research in areas such as deep learning, natural language processing, and reinforcement learning. Students were required to read and present research papers, participate in discussions, and complete a final project applying the concepts learned in the course. The professor also invited guest speakers from industry and academia to share their expertise and insights with the students. Overall, the course was well-received by the students and provided them with valuable knowledge and skills in the field of artificial intelligence. ``` ### Personalized server’s endpoint If you have a personal server asking for queries using the OpenAI’s API -format, (e.g. using StudioLM, with open source models), you can set the +format, (e.g. using LM Studio, with open source models), you can set the endpoint to POST the query on your server instead of the OpenaAI one. > NOTE: when using personalized server endpoint, you can select the > model you would like to use as usual by the `model` option. Clearly, -> avalilable models depend on your local server configuration. +> available models depend on your local server configuration. -> WARNIGN: this option cannot be select if Python backend is request +> WARNING: this option cannot be select if Python backend is request > (i.e., setting `use_py = TRUE`, and a custom `endpoint` won’t work)! ``` r diff --git a/inst/WORDLIST b/inst/WORDLIST index f32353b..6c29940 100644 --- a/inst/WORDLIST +++ b/inst/WORDLIST @@ -6,11 +6,13 @@ Codecov CorradoLanera GPT Il +LM Lifecycle ORCID OpenAI OpenAI's OpenAI’s +OpenaAI Questa README Sei @@ -41,6 +43,7 @@ dell'assistente dell'utente delle desiderato +deterministically deve di didattica @@ -56,6 +59,8 @@ funzione genere generico gpt +gpt's +gpt’s https il impostare @@ -108,5 +113,5 @@ una usare usr utile +venv è -’s diff --git a/man/ubep.gpt-package.Rd b/man/ubep.gpt-package.Rd index 5aecb64..9511f68 100644 --- a/man/ubep.gpt-package.Rd +++ b/man/ubep.gpt-package.Rd @@ -4,9 +4,9 @@ \name{ubep.gpt-package} \alias{ubep.gpt} \alias{ubep.gpt-package} -\title{ubep.gpt: A basic/simple interface to OpenAI’s GPT API} +\title{ubep.gpt: A basic and simple interface to OpenAI’s GPT API} \description{ -The goal of '{ubep.gpt}' is to provide a basic/simple interface to OpenAI's GPT API. The package is designed to work with (i.e., to query on) dataframes/tibbles, and to simplify the process of querying the API. +The goal of 'ubep.gpt' is to provide a basic and simple interface to OpenAI's GPT API (and other compatible APIs). The package is also designed to work with (i.e., to query on) dataframes/tibbles, and to simplify the process of querying the API. } \seealso{ Useful links: diff --git a/tests/testthat/test-get_completion_from_messages.R b/tests/testthat/test-get_completion_from_messages.R index a73db01..b3f4be0 100644 --- a/tests/testthat/test-get_completion_from_messages.R +++ b/tests/testthat/test-get_completion_from_messages.R @@ -34,6 +34,7 @@ test_that("get_completion_from_messages works", { test_that("get_completion_from_messages works w/ py", { # setup + setup_py() model <- "gpt-3.5-turbo" messages <- compose_prompt_api( sys_prompt = compose_sys_prompt(
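Finally, a hedged sketch of the personalized-endpoint usage described in the README hunks above (a sketch only, not part of the patch: the `endpoint` argument name is inferred from the WARNING note, while the URL, the model name, and the `usr_prompt` argument name are placeholders or assumptions):

```r
# Hedged sketch of the personalized-endpoint usage; not part of the patch.
# The `endpoint` argument name is taken from the WARNING note above; the
# URL and the model name are placeholders for your local configuration.
library(ubep.gpt)

res <- query_gpt(
  prompt = compose_prompt_api(
    usr_prompt = "Hi! Please introduce yourself." # argument name assumed
  ),
  model = "my-local-model", # available models depend on your server
  endpoint = "http://localhost:1234/v1/chat/completions" # placeholder URL
)

# Per the WARNING above, a custom `endpoint` won't work together with
# `use_py = TRUE`, so the Python backend is left disabled here.
get_content(res) |> cat()
```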