
CopilotChat: Minor refactor of token limits and reduce to 4096 for gpt-3.5-turbo. #929

Merged
merged 3 commits into microsoft:main from the token-limit branch on May 11, 2023

Conversation

adrianwyatt
Contributor

Motivation and Context

CopilotChat is blowing out token limits.

Description

  • Reduced default token limit to 4096 to match gpt-3.5-turbo
  • Elevated token limits to prompts.json so they can be adjusted along with other configuration values, like models.
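The PR itself does not reproduce the configuration or enforcement code. As a rough illustration of the idea (a configurable completion-token budget with a 4096 default matching gpt-3.5-turbo), here is a minimal sketch. The key names `CompletionTokenLimit` and `ResponseTokenLimit`, and the 4-characters-per-token estimate, are assumptions for illustration, not taken from the PR or from prompts.json:

```python
# Hypothetical sketch of a completion-token budget check. Key names and the
# chars-per-token heuristic are illustrative assumptions, not CopilotChat's code.

DEFAULT_TOKEN_LIMIT = 4096  # default lowered in this PR to match gpt-3.5-turbo

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def remaining_response_budget(prompt: str, config: dict) -> int:
    """Tokens left for the model's reply after the prompt is accounted for."""
    limit = config.get("CompletionTokenLimit", DEFAULT_TOKEN_LIMIT)
    response_reserve = config.get("ResponseTokenLimit", 1024)
    used = estimate_tokens(prompt)
    # Never exceed the reserved response budget, and never go negative.
    return max(0, min(response_reserve, limit - used))
```

Keeping the limit in configuration (rather than hard-coded) is the point of the change: when the model name is swapped in prompts.json, the token budget can be adjusted in the same place.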

@adrianwyatt adrianwyatt self-assigned this May 11, 2023
amsacha previously approved these changes May 11, 2023
glahaye previously approved these changes May 11, 2023
@adrianwyatt adrianwyatt dismissed stale reviews from glahaye and amsacha via f20c142 May 11, 2023 01:44
@adrianwyatt adrianwyatt added labels May 11, 2023: `.NET` (Issue or Pull requests regarding .NET code) and `PR: ready to merge` (PR has been approved by all reviewers, and is ready to merge.)
@adrianwyatt adrianwyatt enabled auto-merge (squash) May 11, 2023 01:45
@adrianwyatt adrianwyatt merged commit 38be6f6 into microsoft:main May 11, 2023
shawncal pushed a commit to johnoliver/semantic-kernel that referenced this pull request May 19, 2023
…t-3.5-turbo. (microsoft#929)

### Motivation and Context
CopilotChat is blowing out token limits.

### Description
- Reduced default token limit to 4096 to match gpt-3.5-turbo
- Elevated token limits to prompts.json so they can be adjusted along
with other configuration values, like models.
@adrianwyatt adrianwyatt deleted the token-limit branch May 22, 2023 17:36
dehoward pushed a commit to lemillermicrosoft/semantic-kernel that referenced this pull request Jun 1, 2023
…t-3.5-turbo. (microsoft#929)
golden-aries pushed a commit to golden-aries/semantic-kernel that referenced this pull request Oct 10, 2023
…t-3.5-turbo. (microsoft#929)
4 participants