key error: 'content' #2

Open
jackprosperpath opened this issue Apr 6, 2024 · 10 comments

Comments

@jackprosperpath

KeyError                                  Traceback (most recent call last)
in <cell line: 112>()
    111 subtopic_reports = []
    112 for subtopic in subtopic_checklist:
--> 113     subtopic_report = generate_subtopic_report(subtopic)
    114     subtopic_reports.append(subtopic_report)
    115

1 frames
in generate_text(prompt, model, max_tokens, temperature)
     19     response = requests.post("https://api.anthropic.com/v1/messages", headers=headers, json=data)
     20     print(response.json())
---> 21     response_text = response.json()['content'][0]['text']
     22     print(remove_first_line(response_text.strip()))
     23     return remove_first_line(response_text.strip())

KeyError: 'content'
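
For context: when the request fails (for example on a rate limit), the Messages API responds with a JSON body containing an error key instead of content, which is what triggers this KeyError. A quick check that surfaces the real error (response here is the object from generate_text in the traceback above):

response_json = response.json()
if 'error' in response_json:
    # e.g. {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': '...'}}
    raise RuntimeError(f"Anthropic API error: {response_json['error']}")
response_text = response_json['content'][0]['text']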

@hiroto172516

me too

@deesatzed

I added an error check to the generate_text function:

def generate_text(prompt, model="claude-3-haiku-20240307", max_tokens=2000, temperature=0.7):
    headers = {
        "x-api-key": ANTHROPIC_API_KEY,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json"
    }
    data = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "system": "You are a world-class researcher. Analyze the given information and generate a well-structured report.",
        "messages": [{"role": "user", "content": prompt}],
    }
    response = requests.post("https://api.anthropic.com/v1/messages", headers=headers, json=data)

    # Debug: print the response JSON for inspection
    response_json = response.json()
    print(response_json)

    # Check if the 'content' key exists in the response JSON
    if 'content' in response_json and response_json['content']:
        response_text = response_json['content'][0]['text']
        print(remove_first_line(response_text.strip()))
        return remove_first_line(response_text.strip())
    else:
        # Handle the missing 'content' key
        print("Error: 'content' key not found in the response.")
        return ""

I did hit the Anthropic rate limit, though...

@dachris88

It's likely a rate-limit issue, similar to mshumer/gpt-author#33 (comment).
The solution is to upgrade to the Build tier. Adding $5 to your account on the Build tier should solve the issue.

@iraadit

iraadit commented Apr 7, 2024

Just upgraded to the Build tier, and I still have the same issue (rate limit).

@hmamawal

hmamawal commented Apr 8, 2024

I just upgraded to the Build tier as well, and I'm getting the same rate-limit error.

@pas-mllr

pas-mllr commented Apr 8, 2024

Rate-limit errors are common with AI researcher agents like this one. Has anyone found a solution yet?

Edit: the trick is to introduce a delay at both API call sites. Your rate limit depends on your API build tier. Here's example code for a 61-second delay for Build tier 1.

import time

# Introduce a delay to prevent rate-limit errors
time.sleep(61)  # delay for 61 seconds (adjust as needed based on the API's rate limit)
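
For placement, a sketch of wrapping the call so every request waits first (generate_text is the function from the notebook; the wrapper name is made up):

import time

def generate_text_with_delay(prompt, **kwargs):
    # Hypothetical wrapper: wait before every call so the requests-per-minute
    # limit is never exceeded (61 s is an assumption with headroom for tier 1).
    time.sleep(61)
    return generate_text(prompt, **kwargs)  # generate_text as defined in the notebook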

Edit 2: hmm, I still run into the tokens-per-day (TPD) rate limit of the Build 1 tier.

@dachris88

Also, you may want to check your API key's usage: how many requests per minute, tokens per day, and tokens per minute your key is consuming across all of your applications. I know someone who ran into the same issue because he had multiple agents sending more than 50 requests per minute to api.anthropic.com.

It has also been noted in several Reddit threads that if you run concurrent requests, you may be throttled even if your requests per minute are well below the limit. A minimal sketch that serializes calls with a lock is below.
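
If concurrent requests are the problem, one option is to allow only one in-flight request at a time (a sketch; generate_text is the notebook's function, and the assumption is that serializing calls avoids the concurrency throttling):

import threading

# Assumption: one in-flight request at a time avoids concurrency-based throttling.
_api_lock = threading.Lock()

def generate_text_serialized(prompt, **kwargs):
    with _api_lock:  # only one request in flight at a time
        return generate_text(prompt, **kwargs)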

@FantomShift

Looks like Build tier 1 supports 50,000 tokens per minute. When I ran the agent and checked my usage, it was up to 89k in under a minute before it gave me the error. Tier 2 is the minimum you'll need to run this, and even that is only 100k tokens per minute.
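
If you want to stay under the limit client-side, here's a rough sketch of a tokens-per-minute budget (50,000 is the tier 1 figure above; estimating ~4 characters per token is an assumption):

import time

TPM_LIMIT = 50_000  # Build tier 1 limit mentioned above

_window_start = time.monotonic()
_tokens_used = 0

def throttle(estimated_tokens):
    """Block until the call fits in the current one-minute token budget."""
    global _window_start, _tokens_used
    now = time.monotonic()
    if now - _window_start >= 60:
        # New minute window: reset the budget
        _window_start, _tokens_used = now, 0
    if _tokens_used + estimated_tokens > TPM_LIMIT:
        # Sleep out the rest of the window, then start a fresh one
        time.sleep(max(0, 60 - (now - _window_start)))
        _window_start, _tokens_used = time.monotonic(), 0
    _tokens_used += estimated_tokens

# Usage before each API call (rough assumption: ~4 characters per token):
# throttle(len(prompt) // 4 + max_tokens)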

@tristan-mcinnis

Any ideas on this? ChatGPT basically said I need to create a buffer with an adaptive delay, but I haven't tested it yet.

import time
import requests

def request_with_adaptive_delay(url, headers, data=None, method='get'):
    max_attempts = 5
    for attempt in range(max_attempts):
        response = requests.request(method, url, headers=headers, json=data) if method == 'post' else requests.get(url, headers=headers)
        if response.status_code == 200:
            return response
        elif response.status_code == 429:
            # Honor the server's Retry-After header if present; default to 60 s
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limit exceeded. Retrying after {retry_after} seconds.")
            time.sleep(retry_after)
        else:
            print(f"Request failed: Status code {response.status_code}, Response: {response.text}")
            time.sleep(5)  # wait 5 seconds before retrying for other errors
    return None  # return None if all attempts fail
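
Untested, but it could slot into generate_text in place of the direct requests.post, something like:

# Hypothetical usage inside generate_text (names from the notebook above):
response = request_with_adaptive_delay(
    "https://api.anthropic.com/v1/messages", headers, data=data, method='post'
)
if response is None:
    raise RuntimeError("All retry attempts failed")
response_json = response.json()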

@elvinagam

Just remove the part that generates additional search queries and continue the search with the initial queries instead. Commenting it out as below works, so you don't hit rate limits.

print(f"Generating additional search queries for subtopic: {subtopic}...")
        # additional_queries_prompt = f"Here are the search results so far for the subtopic '{subtopic}':\n\n{str(search_data)}\n\n---\n\nHere are all the search queries you have used so far for this subtopic:\n\n{str(all_queries)}\n\n---\n\nBased on the search results and previous queries, generate 3 new and unique search queries to expand the knowledge on the subtopic '{subtopic}'. Return your queries in a Python-parseable list. Return nothing but the list. Do so in one line. Start your response with [\""
        # additional_queries = ast.literal_eval('[' + generate_text(additional_queries_prompt).split('[')[1])
        all_queries = initial_queries
        # initial_queries = additional_queries
        # all_queries.extend(additional_queries)
