
Commit

feat: using GPT-4 as default
emanuel-braz committed Nov 8, 2023
1 parent 99824af commit 4cbf1fe
Showing 8 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion code-review/action.js
@@ -136,7 +136,7 @@ async function getAIResponse(messages) {
   const chatCompletionParams = new ChatCompletionParams({
     messages: messages,
     model: OPENAI_API_MODEL,
-    temperature: 0.2,
+    temperature: 0.1,
     max_tokens: 900,
     top_p: 1,
     frequency_penalty: 0,
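A minimal sketch of the request parameters after this change. The field names and values mirror the hunk above; `buildChatCompletionParams` is a hypothetical helper standing in for the `ChatCompletionParams` class defined elsewhere in `code-review/action.js`:

```javascript
// Illustrative only: mirrors the params built in code-review/action.js.
// The lowered temperature (0.2 -> 0.1) makes review output more deterministic.
const OPENAI_API_MODEL = process.env.OPENAI_API_MODEL || 'gpt-4';

function buildChatCompletionParams(messages) {
  return {
    messages: messages,
    model: OPENAI_API_MODEL,
    temperature: 0.1, // lowered from 0.2 in this commit
    max_tokens: 900,
    top_p: 1,
    frequency_penalty: 0,
  };
}

const params = buildChatCompletionParams([
  { role: 'user', content: 'Review this diff.' },
]);
console.log(params.temperature); // → 0.1
```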
2 changes: 1 addition & 1 deletion code-review/action.yml
@@ -14,7 +14,7 @@ inputs:
   openai_key_model:
     description: "OpenAI API model."
     required: false
-    default: "gpt-3.5-turbo"
+    default: "gpt-4"
   exclude:
     description: "Glob patterns to exclude files from the diff analysis"
     required: false
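Since the `default` change above takes effect for any workflow that omits the input, consumers who want to stay on the old model can pin it explicitly. A hypothetical workflow step (the `uses:` path is illustrative; `openai_key_model` is the real input name from `code-review/action.yml`):

```yaml
# Hypothetical usage: pin the model explicitly to opt out of the new gpt-4 default.
- uses: emanuel-braz/code-review@main   # path shown for illustration only
  with:
    openai_key_model: gpt-3.5-turbo
```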
2 changes: 1 addition & 1 deletion generate-enhanced-notes/action.yml
@@ -15,7 +15,7 @@ inputs:
     description: The maximum number of tokens to generate. Default 500.
     required: false
   model:
-    description: The model to use to generate the release notes. Default gpt-3.5-turbo.
+    description: The model to use to generate the release notes. Default gpt-4.
     required: false
   token:
     description: The token to use to create the release
2 changes: 1 addition & 1 deletion services/gpt/gpt_service.js
@@ -46,7 +46,7 @@ class GptService {
       ],
       max_tokens: parseInt(maxTokens) || 500,
       n: 1,
-      model: model || 'gpt-3.5-turbo',
+      model: model || 'gpt-4',
     });

     const generatedNotes = response.choices[0].message.content;
2 changes: 1 addition & 1 deletion services/simple_chat_gpt_service.js
@@ -18,7 +18,7 @@ class SimpleChatGptService {
       ],
       max_tokens: parseInt(maxTokens) || 500,
       n: 1,
-      model: model || 'gpt-3.5-turbo',
+      model: model || 'gpt-4',
     });

     const response = await this.service.chatCompletions(params);
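Both `GptService` and `SimpleChatGptService` resolve the model the same way: an explicit `model` argument wins, otherwise the new `gpt-4` default applies. A minimal sketch of that fallback (note that `||` also treats an empty string as "not provided"):

```javascript
// Default-model fallback shared by gpt_service.js and simple_chat_gpt_service.js.
function resolveModel(model) {
  return model || 'gpt-4';
}

console.log(resolveModel(undefined));       // → 'gpt-4'
console.log(resolveModel('gpt-3.5-turbo')); // → 'gpt-3.5-turbo'
```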
2 changes: 1 addition & 1 deletion simple-chat-gpt/README.md
@@ -39,7 +39,7 @@ jobs:
 **Optional** The maximum number of tokens to generate. Defaults to `500`.

 #### model
-**Optional** The model to use. Defaults to `gpt-3.5-turbo`.
+**Optional** The model to use. Defaults to `gpt-4`.

 ---
 ### Outputs
2 changes: 1 addition & 1 deletion simple-chat-gpt/action.yml
@@ -15,7 +15,7 @@ inputs:
     description: The maximum number of tokens to generate. Default 500.
     required: false
   model:
-    description: The model to use to generate the message. Default gpt-3.5-turbo.
+    description: The model to use to generate the message. Default gpt-4.
     required: false

 outputs:
2 changes: 1 addition & 1 deletion simple-chat-gpt/simple_chat_gpt.js
@@ -20,7 +20,7 @@ class SimpleChatGpt {
     const openaiKey = core.getInput('openai_key') || process.env.OPENAI_KEY;
     const prompt = core.getInput('prompt') || 'Say only this: "Hello World from SimpleChatGpt!"';
     const maxTokens = core.getInput('max_tokens') || 500
-    const model = core.getInput('model') || 'gpt-3.5-turbo';
+    const model = core.getInput('model') || 'gpt-4';

     const simpleChatGptService = new SimpleChatGptService(openaiKey);
     const message = await simpleChatGptService.call({prompt, maxTokens, model});
