key error: 'content' #2
Comments
me too
I added an error check to the generate_text function, starting from
def generate_text(prompt, model="claude-3-haiku-20240307", max_tokens=2000, temperature=0.7):
(a sketch of the check follows below). I did hit the Anthropic rate limit, though...
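A minimal sketch of what that error check could look like, based on the request visible in the traceback below. The ANTHROPIC_API_KEY name is an assumption, the error handling is only one possible choice, and remove_first_line is the helper already used in the notebook:

```python
import requests

def generate_text(prompt, model="claude-3-haiku-20240307", max_tokens=2000, temperature=0.7):
    headers = {
        "x-api-key": ANTHROPIC_API_KEY,  # assumed to be defined elsewhere in the notebook
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    data = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = requests.post("https://api.anthropic.com/v1/messages", headers=headers, json=data)
    body = response.json()
    # On a rate-limit (or any other) failure the API returns an "error" object and no
    # "content" key, which is what produces KeyError: 'content'. Fail with a readable
    # message instead of indexing blindly.
    if "content" not in body:
        raise RuntimeError(f"Anthropic API error ({response.status_code}): {body.get('error')}")
    response_text = body["content"][0]["text"]
    return remove_first_line(response_text.strip())  # helper from the original notebook
```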
It's likely a rate-limit issue, similar to this one: mshumer/gpt-author#33 (comment)
Just upgraded to the Build tier, and I still have the same issue (rate limit).
I just upgraded to the Build tier as well, and I'm getting the same error with the rate limit.
The rate-limit error usually happens with AI researcher agents like these. Did anyone find a solution yet? Edit: the trick is to introduce a delay at both API calls. What your rate limit is depends on your API build tier; example code for a 61-second delay for Build Tier 1 is sketched below.
Edit 2: hmm, I still run into the tokens-per-day (TPD) rate limit of Build Tier 1.
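A minimal sketch of that delay, assuming both API calls go through the notebook's generate_text. The wrapper name is hypothetical and the 61 seconds is the Build Tier 1 value mentioned above; note that a fixed delay only helps with per-minute limits, not the tokens-per-day limit from Edit 2:

```python
import time

DELAY_SECONDS = 61  # Build Tier 1: keep successive calls in separate per-minute windows

_last_call_time = 0.0

def generate_text_with_delay(prompt, **kwargs):
    """Hypothetical wrapper: space out calls to the notebook's generate_text."""
    global _last_call_time
    wait = DELAY_SECONDS - (time.time() - _last_call_time)
    if wait > 0:
        time.sleep(wait)
    _last_call_time = time.time()
    return generate_text(prompt, **kwargs)
```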
Also, you may want to check your API key's usage: how many requests per minute, tokens per day, and tokens per minute that key is making across all of your applications. I know someone running into the same issue because he has multiple agents sending more than 50 requests per minute to api.anthropic.com. It has also been noted in several Reddit threads that concurrent requests can get throttled even when your requests per minute are well below the limit.
Looks like Build Tier 1 supports 50,000 tokens per minute. When I ran the agent and checked my usage, it was up to 89k tokens in under a minute before it gave me the error. Tier 2 is the minimum you'll need to run this, and even that is only 100k tokens per minute.
Any ideas on this? ChatGPT basically said I need to create a buffer with an adaptive delay, but I haven't tested it yet. The suggested signature was:
def request_with_adaptive_delay(url, headers, data=None, method='get'):
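A rough sketch of what that adaptive delay could look like, filled in from the signature above. The retry count, backoff values, and use of a Retry-After header are assumptions, not tested behavior from this thread:

```python
import time
import requests

def request_with_adaptive_delay(url, headers, data=None, method='get',
                                max_retries=5, base_delay=5.0):
    """Retry a request, backing off whenever the API answers 429 (rate limited)."""
    delay = base_delay
    response = None
    for _ in range(max_retries):
        if method.lower() == 'post':
            response = requests.post(url, headers=headers, json=data)
        else:
            response = requests.get(url, headers=headers, params=data)
        if response.status_code != 429:
            return response
        # Prefer the server's own hint if it sends one, otherwise back off exponentially.
        retry_after = response.headers.get("retry-after")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    return response  # still rate limited after max_retries attempts
```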
Just remove the part for generating additional search queries. Instead, continue the search with
KeyError Traceback (most recent call last)
in <cell line: 112>()
111 subtopic_reports = []
112 for subtopic in subtopic_checklist:
--> 113 subtopic_report = generate_subtopic_report(subtopic)
114 subtopic_reports.append(subtopic_report)
115
1 frames
in generate_text(prompt, model, max_tokens, temperature)
19 response = requests.post("https://api.anthropic.com/v1/messages", headers=headers, json=data)
20 print(response.json())
---> 21 response_text = response.json()['content'][0]['text']
22 print(remove_first_line(response_text.strip()))
23 return remove_first_line(response_text.strip())
KeyError: 'content'