Describe the bug
When I paste a huge document (e.g. the code of main.py) into the conversation, MemGPT keeps throwing this error:

error = This model's maximum context length is 4097 tokens. However, your messages resulted in 8638 tokens (7919 in the messages, 719 in the functions). Please reduce the length of the messages or functions.

The code at (https://github.com/cpacker/MemGPT/blob/7d6aa4096d5118abc1d9bc2c9a298ed88a9e0da8/memgpt/agent.py#L1117) then runs:

```python
# If we got a context alert, try trimming the messages length, then try again
if "maximum context length" in str(e):
    # A separate API call to run a summarizer
    await self.summarize_messages_inplace()
    # Try step again
    return await self.step(user_message, first_message=first_message)
```

and then the code at (https://github.com/cpacker/MemGPT/blob/7d6aa4096d5118abc1d9bc2c9a298ed88a9e0da8/memgpt/agent.py#L1126):

```python
async def summarize_messages_inplace(self, cutoff=None, preserve_last_N_messages=True):
    assert self.messages[0]["role"] == "system", f"self.messages[0] should be system (instead got {self.messages[0]})"
    # Start at index 1 (past the system message),
    # and collect messages for summarization until we reach the desired truncation token fraction (eg 50%)
    # Do not allow truncation of the last N messages, since these are needed for in-context examples of function calling
    token_counts = [count_tokens(str(msg)) for msg in self.messages]
    message_buffer_token_count = sum(token_counts[1:])  # no system message
    token_counts = token_counts[1:]
    desired_token_count_to_summarize = int(message_buffer_token_count * MESSAGE_SUMMARY_TRUNC_TOKEN_FRAC)
    candidate_messages_to_summarize = self.messages[1:]
    if preserve_last_N_messages:
        candidate_messages_to_summarize = candidate_messages_to_summarize[:-MESSAGE_SUMMARY_TRUNC_KEEP_N_LAST]
        token_counts = token_counts[:-MESSAGE_SUMMARY_TRUNC_KEEP_N_LAST]
```

This never reaches the latest message (the main.py code I typed as input), so it cannot summarize that message away.
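A minimal sketch of the failure mode described above, assuming a crude stand-in tokenizer and made-up values for the two constants (they are illustrations, not MemGPT's real values): with `preserve_last_N_messages=True` the huge final message sits inside the protected tail, so even summarizing every remaining candidate to nothing cannot hit the desired token reduction, and the retried step raises the same context-length error.

```python
# Stand-in constants, assumed for illustration only
MESSAGE_SUMMARY_TRUNC_TOKEN_FRAC = 0.75
MESSAGE_SUMMARY_TRUNC_KEEP_N_LAST = 3

def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "hello there"},
    {"role": "assistant", "content": "hi, how can I help?"},
    {"role": "user", "content": "please review this file"},
    {"role": "assistant", "content": "sure, paste it"},
    {"role": "user", "content": "word " * 8000},  # the huge main.py paste
]

token_counts = [count_tokens(str(msg)) for msg in messages][1:]  # skip system
desired = int(sum(token_counts) * MESSAGE_SUMMARY_TRUNC_TOKEN_FRAC)

# preserve_last_N_messages=True drops the tail (including the huge message)
# from the summarization candidates
candidates = messages[1:][:-MESSAGE_SUMMARY_TRUNC_KEEP_N_LAST]
candidate_tokens = sum(token_counts[:-MESSAGE_SUMMARY_TRUNC_KEEP_N_LAST])

# Only the two small early messages remain as candidates, so summarization
# can never free the desired number of tokens
print(candidate_tokens < desired)  # True
```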
To Reproduce
Steps to reproduce the behavior:
Paste an over-length document (e.g. the code of main.py) as the user input at (https://github.com/cpacker/MemGPT/blob/7d6aa4096d5118abc1d9bc2c9a298ed88a9e0da8/memgpt/main.py#L391C5-L391C28)
Expected behavior
The over-length text should be truncated and summarized into acceptably sized pieces of text, and those pieces saved.
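A rough sketch of that expected chunk-then-summarize flow; `chunk_text` and the 1000-token limit are hypothetical names and values for illustration, not MemGPT APIs:

```python
def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def chunk_text(text: str, max_tokens: int) -> list:
    # Split an over-length input into pieces that each fit the context window;
    # each piece could then be summarized and saved separately
    words = text.split()
    return [" ".join(words[i:i + max_tokens]) for i in range(0, len(words), max_tokens)]

doc = "token " * 4500  # an over-length paste, e.g. the contents of main.py
pieces = chunk_text(doc, max_tokens=1000)
print(len(pieces), max(count_tokens(p) for p in pieces))  # 5 1000
```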
Screenshots
How did you install MemGPT?

```
git clone [email protected]:cpacker/MemGPT.git
pip install -r requirements.txt
```
Your setup (please complete the following information)