---
format: gfm
knitr:
opts_chunk:
collapse: true
---
<!-- badges: start -->
[![R-CMD-check](https://github.com/AlbertRapp/tidychatmodels/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/AlbertRapp/tidychatmodels/actions/workflows/R-CMD-check.yaml)
<!-- badges: end -->
# tidychatmodels
![](tidychatmodels.png){width="400px"}
## About this package
This package provides a simple interface to chat with your favorite AI chatbot from R.
It is inspired by the modular nature of `{tidymodels}`, where you can easily swap out one ML model for another while keeping the rest of the workflow the same.
In the same vein, this package aims to communicate with different chatbot vendors like [openAI](https://platform.openai.com/docs/api-reference/making-requests), [mistral.ai](https://docs.mistral.ai/api/), etc. using the same interface.
Basically, this package is a wrapper around the APIs of different chatbot vendors and provides a unified interface for communicating with them.
The underlying package that handles all the communication is the [`{httr2}`](https://httr2.r-lib.org/) package.
For a deep dive into `{httr2}`, you could check out one of my tutorials [on YouTube](https://youtu.be/hmtE4QGIOuk).
## Video walkthrough
If you want a video walkthrough of this package, check out:
<iframe width="560" height="315" src="https://www.youtube.com/embed/RjtADzX-sJY?si=ZdyPZxp3Meaqxt4j" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Installation
Currently, this package is only available on GitHub. To install it, you can use the `devtools` package.
```{.r}
# install.packages("devtools")
devtools::install_github("AlbertRapp/tidychatmodels")
```
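Alternatively, the same call via the lighter `{remotes}` package should work just as well:
```{.r}
# install.packages("remotes")
remotes::install_github("AlbertRapp/tidychatmodels")
```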
## Getting Started
What you will need to get started is an API key from the chatbot vendor you want to use.
For example, to use the openAI chatbot, you will need to sign up for an API key [here](https://platform.openai.com/api-keys).
Once you have that key, you can use it to authenticate with the openAI API.
I recommend saving the key in a `.env` file and loading it into your R environment using the `{dotenv}` package.
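For reference, a `.env` file is just a plain-text file with one `KEY=value` pair per line. The variable names below are the ones used throughout this README; the values are placeholders for your actual keys. Also make sure to add the file to your `.gitignore` so the keys never end up in version control.
```
OAI_DEV_KEY=your-openai-key
MISTRAL_DEV_KEY=your-mistral-key
```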
```{r}
#| collapse: true
dotenv::load_dot_env('../.env')
library(tidychatmodels)
chat_openai <- create_chat('openai', Sys.getenv('OAI_DEV_KEY'))
chat_openai
```
Afterwards, you can add a model to the chat object.
In this case, we are adding the `gpt-3.5-turbo` model.
The user is responsible for knowing which models are available at a vendor like OpenAI.
```{r}
chat_openai |>
add_model('gpt-3.5-turbo')
```
Similarly, you can add parameters like `temperature` or `max_tokens` to the chat object.
```{r}
create_chat('openai', Sys.getenv('OAI_DEV_KEY')) |>
add_model('gpt-3.5-turbo') |>
add_params(temperature = 0.5, max_tokens = 100)
```
Afterwards, you can add messages to your chat object using different roles.
Typically, you might first use a system message to set the stage for what your bot is required to do.
Afterwards, you can add a user message.
```{r}
chat_openai <- create_chat('openai', Sys.getenv('OAI_DEV_KEY')) |>
add_model('gpt-3.5-turbo') |>
add_params(temperature = 0.5, max_tokens = 100) |>
add_message(
role = 'system',
message = 'You are a chatbot that completes texts.
You do not return the full text.
Just what you think completes the text.'
) |>
add_message(
# default role = 'user'
'2 + 2 is 4, minus 1 that\'s 3, '
)
chat_openai
```
At this stage, you haven't actually started any chat with the bot.
You can do so by calling the `perform_chat` method.
Beware that this will consume your API calls and will likely incur costs.
Once the chat is performed, you can extract the chat from the chat object.
```{r}
chat_openai <- chat_openai |> perform_chat()
chat_openai |> extract_chat()
```
Excellent!
ChatGPT seems to know the next line of this [glorious song](https://www.youtube.com/watch?v=M3ujv8xdK2w&ab_channel=musiclyrics).
Also, you can save the chat into a tibble.
If you want to suppress the printed output of the chat, you can use the `silent` parameter.
```{r}
msgs <- chat_openai |> extract_chat(silent = TRUE)
msgs
```
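Since `msgs` is a regular tibble, you can wrangle it with the usual tidyverse tools. For example, assuming the columns are named `role` and `message` as shown above, you could pull out just the assistant's reply like this:
```{.r}
msgs |>
  dplyr::filter(role == 'assistant') |>
  dplyr::pull(message)
```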
You can continue the conversation by adding another user message and then performing the chat again.
While we're at it, let's also modify the `temperature` parameter for this new reply.
```{r}
chat_openai <- chat_openai |>
add_message(
role = 'user',
message = 'Make it cooler!'
) |>
add_params(temperature = 0.9) |>
perform_chat()
chat_openai |> extract_chat()
```
Ah yes, that's much cooler.
But beware, this sent the whole chat again and consumed another API call.
### Switching to another vendor
Let's recap our full workflow.
```{r}
#| eval: false
create_chat('openai', Sys.getenv('OAI_DEV_KEY')) |>
add_model('gpt-3.5-turbo') |>
add_params(temperature = 0.5, max_tokens = 100) |>
add_message(
role = 'system',
message = 'You are a chatbot that completes texts.
You do not return the full text.
Just what you think completes the text.'
) |>
add_message(
# default role = 'user'
'2 + 2 is 4, minus 1 that\'s 3, '
) |>
perform_chat()
```
You can easily switch to some other vendor now.
For example, let's go for the `mistral-large-latest` model from [Mistral.ai](https://docs.mistral.ai/api/).
```{r}
mistral_chat <- create_chat('mistral', Sys.getenv('MISTRAL_DEV_KEY')) |>
add_model('mistral-large-latest') |>
add_params(temperature = 0.5, max_tokens = 100) |>
add_message(
role = 'system',
message = 'You are a chatbot that completes texts.
You do not return the full text.
Just what you think completes the text.'
) |>
add_message(
# default role = 'user'
'2 + 2 is 4, minus 1 that\'s 3, '
) |>
perform_chat()
mistral_chat |> extract_chat()
```
## Supported vendors
Currently, the only supported vendors are openAI, mistral.ai and [ollama](https://ollama.com/).
ollama allows you to run local LLMs and chat with them through `localhost`.
ollama's API is a bit different, but through the `tidychatmodels` interface everything should still work the same or at least be very similar.
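Note that the examples below assume the ollama server is already running on your machine (by default it listens on port 11434). Depending on how you installed ollama, you may need to start it first, e.g. with
```{.bash}
ollama serve
```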
For example, creating a chat works pretty much the same but doesn't require an API key.
```{r}
create_chat('ollama')
```
Notice how there is already a parameter `stream` that is set to `false`.
This accounts for a difference in the API of the ollama chat engine.
You see, by default ollama streams the reply token by token.
But `{httr2}` doesn't handle that out of the box (or rather I didn't bother looking into how to do that with `{httr2}`).
So that's why we set `stream` to `false` by default.
Now, you can add a local model to your chat.
So, assume that you ran (in a terminal, outside of R)
```{.bash}
ollama pull gemma:7b
```
Then you can add the model to your chat object.
```{r}
create_chat('ollama') |>
add_model('gemma:7b')
```
And just like before, you can add messages and perform the chat.
Beware, though, that for some models system messages don't actually work the way they do for openAI or mistral.ai models.
So here I've only added a user message.
```{r}
ollama_chat <- create_chat('ollama') |>
add_model('gemma:7b') |>
add_message('What is love? IN 10 WORDS.') |>
perform_chat()
ollama_chat |>
extract_chat()
```
And adding more messages works too.
```{r}
ollama_chat <- ollama_chat |>
add_message('Now describe hate in 10 words') |>
perform_chat()
ollama_chat |>
extract_chat()
```
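You can even feed earlier replies back into a new prompt. For example, let's take the two replies from above and ask the model whether they are related.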
```{r}
msgs <- ollama_chat |> extract_chat(silent = TRUE)
ollama_chat |>
add_message(
paste(
'You said: "',
msgs$message[2],
msgs$message[4],
'" Is there a relationship between these two?'
)
) |>
perform_chat() |>
extract_chat()
```