fix(OpenAI Node, Basic LLM Chain Node, Tool Agent Node): Better OpenAI API rate limit errors #10797
Conversation
Left a few comments to clarify whether this is actually only triggered by OpenAI models or not.
```ts
if (error instanceof NodeApiError) {
	// If the error is an OpenAI's rate limit error, we want to handle it differently
	// because OpenAI has multiple different rate limit errors
	const openAiErrorCode: string | undefined = (error.cause as any).error?.code;
```
How do you know, at this point, that it's an actual OpenAI error, and not some other API error that happens to have an error-code? Please check if this check is correct, and/or if the name of the variable could signal something better :)
Thanks, I've added a check that the underlying error comes from OpenAI
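A guard along these lines could verify the request origin before interpreting the error code. This is a hypothetical sketch: the `config.url` shape is an assumption (axios-style errors), not necessarily the actual structure of n8n's `NodeApiError` cause, and the helper name is illustrative.

```typescript
// Hypothetical sketch: before reading an OpenAI-specific error code off the
// cause, verify the failed request actually targeted the OpenAI API.
// The `config.url` property path is an assumption, not n8n's real shape.
function isOpenAiError(cause: unknown): boolean {
	const url = (cause as { config?: { url?: string } } | undefined)?.config?.url;
	return typeof url === 'string' && url.includes('api.openai.com');
}
```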
packages/@n8n/nodes-langchain/nodes/vendors/OpenAi/helpers/error-handling.ts
Force-pushed from cf7976f to 780cb8d
Would be cool if you could add the "forbidden" error to the error messages too!
✅ All Cypress E2E specs passed

n8n Run #6935

| Run property | Value |
|---|---|
| Project | n8n |
| Branch Review | ai-313-openai-is-returning-429-errors-quite-often |
| Run status | Passed #6935 |
| Run duration | 04m 43s |
| Commit | 780cb8dbce: 🌳 🖥️ browsers:node18.12.0-chrome107 🤖 burivuhster 🗃️ e2e/* |
| Committer | Eugene Molodkin |

| Test results | Count |
|---|---|
| Failures | 0 |
| Flaky | 3 |
| Pending | 0 |
| Skipped | 0 |
| Passing | 430 |
Got released with
fix(OpenAI Node, Basic LLM Chain Node, Tool Agent Node): Better OpenAI API rate limit errors (#10797)
Summary
This PR introduces custom handling for OpenAI rate limit errors.
According to OpenAI's docs (https://platform.openai.com/docs/guides/error-codes/api-errors), two kinds of errors can result in a 429 HTTP response: rate limit errors and insufficient quota errors.
We can inspect the `code` field in the response to determine which kind of error occurred.
We also override the default message for these errors with more descriptive, OpenAI-specific ones.
This should also help with the issue where a long, poorly formatted text was displayed for some rate limit errors.
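The branching described above can be sketched roughly as follows. The code values `insufficient_quota` and `rate_limit_exceeded` come from OpenAI's error-code documentation; the function name and message wording are illustrative, not the PR's actual strings.

```typescript
// Hypothetical sketch: map the `code` field of an OpenAI 429 response body
// to a more descriptive, OpenAI-specific message. The message text and the
// helper name are assumptions for illustration only.
interface OpenAiErrorBody {
	error?: { code?: string };
}

function describe429(body: OpenAiErrorBody): string {
	switch (body.error?.code) {
		case 'insufficient_quota':
			return 'OpenAI: you exceeded your current quota. Check your plan and billing details.';
		case 'rate_limit_exceeded':
			return 'OpenAI: rate limit reached. Please retry after a short delay.';
		default:
			// Not a recognized OpenAI rate-limit code; fall back to a generic message
			return 'OpenAI returned HTTP 429.';
	}
}
```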
Related Linear tickets, Github issues, and Community forum posts
https://linear.app/n8n/issue/AI-313/openai-is-returning-429-errors-quite-often
https://linear.app/n8n/issue/AI-342/bad-error-in-openai-node
Review / Merge checklist
release/backport (if the PR is an urgent fix that needs to be backported)