Error: Maximum update depth exceeded when using the useCompletion hook in Next.js on long gpt-4o responses #1610
Comments
Same here |
Hey, I don't know if you've already solved the problem, but I managed to fix it, and maybe it can help you. My issue was that I was using …
This was causing multiple updates. I solved the problem by passing …
I hope this helps you. |
I also experience this issue. I only have it when using the route handler. After changing to the new rsc/ai I have not seen it. It must have something to do with the re-renders while streaming.
So after reading this message here I think I finally solved the issue... Spent so much time on it xD So before, I had the chat message displayed like this: …
and then in the output return …
This caused exhaustion of the max update depth. However, after changing the structure to: …
the lag issues and the "maximum update depth exceeded" error seem to have disappeared completely. In dev I was only ever able to get maybe 4 or 5 messages before Chrome just gave up; now I can get pretty much as many as I want! |
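The poster's actual before/after code was not preserved above, but a hypothetical sketch of that kind of restructuring might look like the following: extract message rendering into its own component so each streamed token doesn't force the whole page to re-render (all names here are illustrative).

import type { Message } from 'ai' // assumed import for the Message type

// Hypothetical MessageItem: renders a single message. Pairing this
// with React.memo (see the memo example later in this thread) lets
// React skip messages whose props haven't changed.
function MessageItem({ message }: { message: Message }) {
  return <div>{message.content}</div>
}

export function MessageList({ messages }: { messages: Message[] }) {
  return (
    <>
      {messages.map((m) => (
        <MessageItem key={m.id} message={m} />
      ))}
    </>
  )
}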
I tried to reproduce the bug with … @Jerry-VW Can you provide me with code to reproduce? Ideally some modification of the next/useCompletion example (which I tried with gpt-4o): |
I have this older branch of my example project where I have the exact same problem using "useChat" and an API route. In the dev environment it would always lag out and crash my browser after a few messages. In production I have not experienced the same level of lag; here it is on par with other chatbots I have tried. So after maybe 20 messages the UI can begin to lock up. |
I discovered that when using the streaming method, errors occur when the response message reaches a certain length. Even with streaming, no errors occur if the response message is short. I suspect that this issue arises because the component updates every time a streaming input comes in. |
I'm looking for a minimal example because it's unclear to me whether this is an issue with useChat / useCompletion or with the other React code.
@ElectricCodeGuy your example has a lot of other code, which makes it hard to pinpoint the issue.
@Jerry-VW is this for a single response / completion or for a long chat?
@choipd I tried to produce a very long message (max tokens) with no issues; however, I have a pretty fast machine and that might also play a role here. |
I ran into this trying to render …
I think the problem is React picking up on the … |
Any update? |
TBH I think a lot of the previous answers are incorrect. I think it comes from long responses, which cause React to re-render too many times in a row (50 is the limit), and whether you hit that depends entirely on the response size. Something that has temporarily fixed the issue for me has been to create a queue which batches the stream values into groups of n before joining and streaming them to the frontend. This increases the maximum response size by a factor of n and can be adjusted as needed. Here is a simplified example from my server action: ...
const streamableValue = createStreamableValue("");

const streamChunks = async () => {
  let queue: string[] = [];
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta.content;
    if (content == null) {
      continue;
    }
    queue.push(content);
    // Flush in batches of 8 chunks so the client re-renders once
    // per batch instead of once per token.
    if (queue.length >= 8) {
      streamableValue.append(queue.join(""));
      queue = [];
    }
  }
  // Flush any remaining chunks, then close the stream.
  streamableValue.append(queue.join(""));
  streamableValue.done();
};

streamChunks();
return { value: streamableValue.value };

I haven't hit the error since this, but that doesn't mean it couldn't occur with a really large response. I assume the real solution would be to somehow 'give React a break' while streaming. |
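For completeness, a minimal sketch of how a client component might consume that batched stream, assuming the server action above is exported as continueConversation (a hypothetical name) and that readStreamableValue from ai/rsc yields the accumulated value after each append:

'use client'
import { useState } from 'react'
import { readStreamableValue } from 'ai/rsc'
import { continueConversation } from './actions' // hypothetical server action module

export function Completion() {
  const [text, setText] = useState('')

  async function run(prompt: string) {
    const { value } = await continueConversation(prompt)
    // Each batch of 8 chunks appended on the server arrives as one
    // update here, so state changes roughly 8x less often than per-token.
    for await (const v of readStreamableValue(value)) {
      setText(v ?? '')
    }
  }

  return <button onClick={() => run('Tell me a long story')}>{text || 'Run'}</button>
}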
@arnab710 if you want an update kindly prepare a minimum reproduction of the issue for the maintainers. |
This code mentions "synchronously"; I wonder if, instead of setting state on each render, we should use startTransition, as that doesn't block the UI when updating. |
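A minimal sketch of that idea, assuming the per-chunk update lands in a plain useState setter (the hook and handler names here are illustrative, not from the SDK):

import { startTransition, useState } from 'react'

function useStreamedCompletion() {
  const [completion, setCompletion] = useState('')

  // Marking the per-chunk update as a transition lets React treat it
  // as low priority, keeping input and scrolling responsive.
  function onChunk(delta: string) {
    startTransition(() => {
      setCompletion((prev) => prev + delta)
    })
  }

  return { completion, onChunk }
}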
For me, the markdown library was causing performance issues due to the rapid influx of data chunks, which it was unable to process in real time (i.e. … |
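A common mitigation for that, sketched here with react-markdown as an assumed stand-in for whatever markdown renderer is in use: memoize the renderer so messages whose markdown string hasn't changed skip re-parsing entirely.

import { memo } from 'react'
import ReactMarkdown from 'react-markdown'

// memo shallow-compares props, so only the message currently being
// streamed pays the markdown parse cost on each incoming chunk;
// completed messages render from the memoized result.
export const MemoizedMarkdown = memo(ReactMarkdown)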
Having the same issue when messages is a dependency for something else. Tried wrapping it in a debounce, but it loses real-time streaming. I think … |
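One way to keep the live stream while taming a messages dependency is to throttle the value that feeds the expensive computation rather than the stream itself. A minimal sketch, where useThrottledValue is a hypothetical helper and expensiveSync stands in for whatever depends on messages:

import { useEffect, useRef, useState } from 'react'

// Re-emits `value` at most once per `ms`, so effects that depend on
// it run a bounded number of times during streaming.
function useThrottledValue<T>(value: T, ms: number): T {
  const [throttled, setThrottled] = useState(value)
  const lastRun = useRef(Date.now())

  useEffect(() => {
    const elapsed = Date.now() - lastRun.current
    if (elapsed >= ms) {
      lastRun.current = Date.now()
      setThrottled(value)
      return
    }
    const id = setTimeout(() => {
      lastRun.current = Date.now()
      setThrottled(value)
    }, ms - elapsed)
    return () => clearTimeout(id)
  }, [value, ms])

  return throttled
}

// Usage: render `messages` directly for live streaming, but hang the
// expensive dependency off the throttled copy.
// const throttledMessages = useThrottledValue(messages, 250)
// useEffect(() => expensiveSync(throttledMessages), [throttledMessages])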
@oalexdoda throttling would impact the stream consumption significantly, since we are using backpressure and the reading speed depends on the client speed. Before I move to a fix, I'd like to see a minimal reproduction that I can run myself. You mentioned that it's related to a message dependency. Would you mind putting together a minimal reproduction, either as PR or as a repo, so I can investigate? |
So I fixed this using … |
To summarize: …
Considerations: … |
Still getting this error in the latest version. Is there a working manual workaround at the moment? |
I had the same issue with my project and this is how I've solved it (with React). The workaround seems to be to wrap the Message rendering component with memo. This is the code I found on postgres.new:

import { memo } from 'react'
import type { Message } from 'ai' // assumed import for the Message type; not in the original snippet
export type ChatMessageProps = {
message: Message
isLast: boolean
}
function ChatMessage({ message, isLast }: ChatMessageProps) {
// implementation of rendering Message
}
// Memoizing is important here - otherwise React continually
// re-renders previous messages unnecessarily (big performance hit)
export default memo(ChatMessage, (prevProps, nextProps) => {
// Always re-render the last message to fix a bug where `useChat()`
// doesn't trigger a re-render when multiple tool calls are added
// to the same message. Otherwise shallow compare.
return (
!nextProps.isLast &&
prevProps.isLast === nextProps.isLast &&
prevProps.message === nextProps.message
)
})

How the ChatMessage component is used:

{messages.map((message, i) => (
<ChatMessage
key={message.id}
message={message}
isLast={i === messages.length - 1}
/>
))} |
I'm currently using a custom fork of … It's not sustainable, but it works for now. The other observation I have here is that if the component where you use … Or if you … Not sure if this is a React compiler issue (I'm on Next 14 still), or if a permanent fix can be baked into the SDK. Most of the time it errors on very long-context conversations, and I've seen it even crash on users in production (to the application-error white screen of death). |
I've tried this solution, but it doesn't work; I'll keep waiting for the fix mentioned in the last reply. |
Description
Use useCompletion from the AI SDK to call gpt-4o with a long response in streaming mode.
It will hang the UI. It looks like it's updating the completion state at a very fast pace.
Code example
No response
Additional context
No response