
[Suggestion] Using Exponential Backoff to avoid LLM Rate Limit Errors #582

Open
gssakash-SxT opened this issue Jul 23, 2024 · 0 comments
gssakash-SxT commented Jul 23, 2024

I'm probably overthinking this since current models have much more generous rate limits, but I was wondering if it would be worth using exponential backoff to keep Mentat from stopping automatically when it hits an API rate limit from the LLM in use.

The tenacity Python library, which OpenAI recommends in one of their cookbooks, makes this very easy to implement, but I wanted to know whether it actually solves the problem and to hear your thoughts.
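For reference, here's a minimal sketch of the pattern from the OpenAI cookbook, assuming the openai v1 Python client. The wrapper name, wait bounds, attempt count, and model are just illustrative, not anything Mentat currently uses:

```python
import openai
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_random_exponential,
)

client = openai.OpenAI()

# Retry only on rate-limit errors, waiting a jittered, exponentially
# growing interval (between 1s and 60s) and giving up after 6 attempts.
@retry(
    retry=retry_if_exception_type(openai.RateLimitError),
    wait=wait_random_exponential(min=1, max=60),
    stop=stop_after_attempt(6),
)
def chat_completion_with_backoff(**kwargs):
    return client.chat.completions.create(**kwargs)

response = chat_completion_with_backoff(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

The random jitter is there so that concurrent requests don't all retry at the same instant; restricting the retry to `openai.RateLimitError` avoids masking other failures like auth or invalid-request errors.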
