
Auto-switch to gpt-35-turbo, gpt-4 and gpt-4-32k when number of tokens exceeded by query #3367

Closed
1 task done
NKT00 opened this issue Apr 26, 2023 · 14 comments
Labels
AI efficacy AI model limitation Not related to AutoGPT directly. enhancement New feature or request Stale

Comments

@NKT00

NKT00 commented Apr 26, 2023

Duplicates

  • I have searched the existing issues

Summary 💡

There are a few bug reports close to this, but would it not make sense to get rid of the error
SYSTEM: Command get_text_summary returned: Error: This model's maximum context length is 4097 tokens. However, your messages resulted in 5113 tokens. Please reduce the length of the messages.
by simply swapping the model, just for that query, whenever the query is below the limit of another model?

gpt-35-turbo has a 4096-token context window, whereas the limits for gpt-4 and gpt-4-32k are 8192 and 32768 tokens respectively. This could be implemented easily; a sketch follows.
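
A minimal sketch of the idea, assuming the published context limits and OpenAI's tiktoken tokenizer (the pick_model helper is hypothetical, not existing AutoGPT code):

```python
import tiktoken

# Published context windows, smallest first, so we pick the cheapest fit.
# (OpenAI API names shown; Azure deployments call the first one gpt-35-turbo.)
MODEL_LIMITS = [
    ("gpt-3.5-turbo", 4096),
    ("gpt-4", 8192),
    ("gpt-4-32k", 32768),
]

def pick_model(prompt: str, reply_budget: int = 1024) -> str:
    """Return the smallest model whose window fits the prompt plus a reply."""
    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by these models
    needed = len(enc.encode(prompt)) + reply_budget
    for model, limit in MODEL_LIMITS:
        if needed <= limit:
            return model
    raise ValueError(f"Query needs ~{needed} tokens; no model is large enough.")
```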

Examples 🌈

No response

Motivation 🔦

Nearly everything that pulls a web page fails, because web pages are generally too big. Some, however, are only slightly too big, and could be run through a larger-context model first to downsize them.

@NKT00 NKT00 changed the title Auto-switch to GPT3.5 when number of tokens for GPT4 exceeded by query Auto-switch to gpt-35-turbo, gpt-4 and gpt-4-32k when number of tokens exceeded by query Apr 26, 2023
@dimitar-d-d

I strongly support this proposal. It should be easy to implement and would definitely help make this tool actually useful.

@Boostrix
Contributor

You could probably use some sort of preprocessor/preparation stage prior to passing such contexts to the LLM.
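
For illustration, one such preprocessing stage could split oversized text into token-bounded chunks before any LLM call (a sketch, not AutoGPT code; chunk_by_tokens is a hypothetical helper):

```python
import tiktoken

def chunk_by_tokens(text: str, max_tokens: int = 3000) -> list[str]:
    """Split text into pieces that each stay under max_tokens."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```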

@dimitar-d-d

> You could probably use some sort of preprocessor/preparation stage prior to passing such contexts to the LLM

Yep. I could do that, and I do. I end up splitting my text assignments into three separate runs of AutoGPT just to avoid the error... This, however, is time-consuming and impractical.

The ability of the tool to dynamically call the larger LLM when applicable, combined with better chunking, would definitely reduce the number of fatal errors.

@Rykimaruh

Has there been any workaround for this? I thought Auto-GPT used gpt-4, which has a greater token limit than 3.5, but I'm still hitting the 4097 max-token limit.

@Boostrix
Contributor

Boostrix commented May 5, 2023

It depends on the level of OpenAI API access you've got.

@anonhostpi

I'd like to say that there should be a switching mechanism that switches between all of the supported APIs, not just OpenAI's models/APIs.

@p-i- perhaps, if and when the repository gets around to implementing the APIs as plugins, add a plugin object that reports the rate limit associated with that API, so that AutoGPT can switch plugins entirely, not just models. A sketch of such an interface is below.
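
Illustrative only (not an actual AutoGPT plugin API): a plugin interface that reports its own limits so an orchestrator can fail over across providers, not just across models:

```python
from abc import ABC, abstractmethod

class LLMPlugin(ABC):
    """Hypothetical provider plugin that advertises its own limits."""

    @abstractmethod
    def context_limit(self) -> int:
        """Maximum tokens per request for this provider/model."""

    @abstractmethod
    def requests_per_minute(self) -> int:
        """Provider rate limit, so the orchestrator can throttle or switch."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Run the completion against this provider."""

def choose_plugin(plugins: list[LLMPlugin], needed_tokens: int) -> LLMPlugin:
    # Pick the first registered provider whose context window fits the request.
    for plugin in plugins:
        if needed_tokens <= plugin.context_limit():
            return plugin
    raise RuntimeError("No registered provider can handle this request.")
```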

@Boostrix
Contributor

Boostrix commented May 5, 2023

> there should be a switching mechanism that switches between all of the supported APIs, not just OpenAI's models/APIs

Which seems to be work in progress: #2158

> maybe add a plugin object that reports the rate limit associated with that API, so that AutoGPT can completely switch plugins, not just models

👍 The basic idea is this: #3466

@anonhostpi

Love ya, Boostrix. Which one are you on the Discord server?

@k-boikov k-boikov added enhancement New feature or request AI model limitation Not related to AutoGPT directly. AI efficacy labels May 6, 2023
@Boostrix
Contributor

Boostrix commented May 9, 2023

> I'd like to say that there should be a switching mechanism that switches between all of the supported APIs, not just OpenAI's models/APIs.

That's a form of feature scaling; see #3466 and #528.

But agreed: if one model fails, there should be an option to try another one, even if that's not the preferred one.

@kinance
Contributor

kinance commented Jun 13, 2023

To fix this issue, the batch summarization approach introduced by PR #4652 can also be applied to the summarize_text function in text.py. A rough sketch of the idea is below.
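
A rough sketch of the batch-summarization idea (independent of PR #4652's actual code; summarize_chunk stands in for a single LLM summarization call, and chunk_by_tokens is the hypothetical splitter sketched earlier in this thread):

```python
def summarize_text(text: str, summarize_chunk, max_tokens: int = 3000) -> str:
    """Map-reduce style: summarize each chunk, then summarize the summaries."""
    chunks = chunk_by_tokens(text, max_tokens)       # token-bounded splitting
    partials = [summarize_chunk(c) for c in chunks]  # one LLM call per chunk
    if len(partials) == 1:
        return partials[0]
    return summarize_chunk("\n".join(partials))      # final pass over partials
```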

@unitythemaker

unitythemaker commented Jun 14, 2023

gpt-3.5-turbo-16k is here.

@github-actions
Contributor

github-actions bot commented Sep 6, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

@github-actions github-actions bot added the Stale label Sep 6, 2023
@github-actions
Contributor

This issue was closed automatically because it has been stale for 10 days with no activity.

@github-actions github-actions bot closed this as not planned Sep 19, 2023
@prathamesh-0909

This issue was closed automatically because it has been stale for more than 10 days without any activity.
