
Error: Maximum update depth exceeded. When using useCompletion hook in nextjs, on long response on gpt-4o #1610

Closed
Jerry-VW opened this issue May 16, 2024 · 26 comments · May be fixed by #2257
Labels: ai/ui, bug (Something isn't working)

Comments

@Jerry-VW

Description

Use useCompletion from the AI SDK to call gpt-4o with a long response in streaming mode.
It will hang the UI. It looks like the completion state is being updated at a very fast pace.
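
A minimal setup matching this description might look like the following (a hypothetical sketch, since no code example was provided with the report; the /api/completion route handler is an assumption):

'use client';

import { useCompletion } from 'ai/react';

export default function Page() {
  // Assumed route handler at /api/completion streaming a gpt-4o response
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/completion',
  });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      {/* Every streamed chunk updates `completion`, re-rendering this tree */}
      <div>{completion}</div>
    </form>
  );
}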

Unhandled Runtime Error
Error: Maximum update depth exceeded. This can happen when a component repeatedly calls setState inside componentWillUpdate or componentDidUpdate. React limits the number of nested updates to prevent infinite loops.

Call Stack
throwIfInfiniteUpdateLoopDetected
node_modules/next/dist/compiled/react-dom/cjs/react-dom.development.js (26607:0)
getRootForUpdatedFiber
node_modules/next/dist/compiled/react-dom/cjs/react-dom.development.js (7667:0)
enqueueConcurrentRenderForLane
node_modules/next/dist/compiled/react-dom/cjs/react-dom.development.js (7589:0)
forceStoreRerender
node_modules/next/dist/compiled/react-dom/cjs/react-dom.development.js (11893:0)
handleStoreChange
node_modules/next/dist/compiled/react-dom/cjs/react-dom.development.js (11872:0)
eval
node_modules/ai/node_modules/swr/core/dist/index.mjs (141:0)
Array.setter
node_modules/ai/node_modules/swr/_internal/dist/index.mjs (408:0)
eval
node_modules/ai/node_modules/swr/_internal/dist/index.mjs (99:0)
mutateByKey
node_modules/ai/node_modules/swr/_internal/dist/index.mjs (353:0)
internalMutate
node_modules/ai/node_modules/swr/_internal/dist/index.mjs (261:0)
eval
node_modules/ai/node_modules/swr/core/dist/index.mjs (367:29)
setCompletion
node_modules/ai/react/dist/index.mjs (1057:0)
callCompletionApi
node_modules/ai/react/dist/index.mjs (970:0)

Code example

No response

Additional context

No response

@Jerry-VW Jerry-VW changed the title Error: Maximum update depth exceeded. When using completion on long response on gpt-4o Error: Maximum update depth exceeded. When using useCompletion hook in nextjs, on long response on gpt-4o May 16, 2024
@lgrammel lgrammel added bug Something isn't working ai/rsc and removed bug Something isn't working labels May 16, 2024
@rossanodr

Same here

@rossanodr

rossanodr commented May 17, 2024


Hey, I don't know if you've already solved the problem, but I managed to fix it, and maybe it can help you.

My issue was that I was using message.content in a map.

messages.map((m, i) => (
    <ChatMessage
        key={i}
        content={m.content}
    />
))

This was causing multiple updates.

I solved the problem by passing messages directly to the final component:

<ChatMessage content={messages} />

I hope this helps you.

@ElectricCodeGuy

ElectricCodeGuy commented May 24, 2024

I also experienced this issue, but only when using the route handler. After changing to the new ai/rsc approach I have not seen it. It must have something to do with the re-renders while streaming.

> Hey, I don't know if you've already solved the problem, but I managed to fix it [...] I solved the problem by passing messages directly to the final component. (quoting @rossanodr's comment above)

So after reading this message I think I finally solved the issue... Spent so much time on it xD

So before, I had the chat message displayed like this:

const ChatMessage: FC<ChatMessageProps> = ({ messages }) => {
  const [isCopied, setIsCopied] = useState(false);
  const router = useRouter();
  const componentsAI: Partial<Components> = {
    a: ({ href, children }) => (
      <a
        href={href}
        onClick={(e) => {
          e.preventDefault();
          if (href) {
            router.push(href);
          }
        }}
      >
        {children}
      </a>
    ),
    code({ className, children, ...props }) {
      const match = /language-(\w+)/.exec(className || '');
      const language = match && match[1] ? match[1] : '';
      const inline = !language;
      if (inline) {
        return (
          <code className={className} {...props}>
            {children}
          </code>
        );
      }

      return (
        <div
          style={{
            position: 'relative',
            borderRadius: '5px',
            padding: '20px',
            marginTop: '20px',
            maxWidth: '100%'
          }}
        >
          <span
            style={{
              position: 'absolute',
              top: '0',
              left: '5px',
              fontSize: '0.8em',
              textTransform: 'uppercase'
            }}
          >
            {language}
          </span>
          <div
            style={{
              overflowX: 'auto',
              maxWidth: '1100px'
            }}
          >
            <pre style={{ margin: '0' }}>
              <code className={className} {...props}>
                {children}
              </code>
            </pre>
          </div>
        </div>
      );
    }
  };

  const componentsUser: Partial<Components> = {
    a: ({ href, children }) => (
      <a
        href={href}
        onClick={(e) => {
          e.preventDefault();
          if (href) {
            router.push(href);
          }
        }}
      >
        {children}
      </a>
    )
  };
  const copyToClipboard = (str: string): void => {
    void window.navigator.clipboard.writeText(str);
  };

  const handleCopy = (content: string) => {
    copyToClipboard(content);
    setIsCopied(true);
    setTimeout(() => setIsCopied(false), 1000);
  };

  return (
    <>
      {messages.map((m, index) => (
        <ListItem
          key={`${m.id}-${index}`}
          sx={
            m.role === 'user'
              ? messageStyles.userMessage
              : messageStyles.aiMessage
          }
        >
          <Box
            sx={{
              position: 'absolute',
              top: '10px',
              left: '10px'
            }}
          >
            {m.role === 'user' ? (
              <PersonIcon sx={{ color: '#4caf50' }} />
            ) : (
              <AndroidIcon sx={{ color: '#607d8b' }} />
            )}
          </Box>
          {m.role === 'assistant' && (
            <Box
              sx={{
                position: 'absolute',
                top: '5px',
                right: '5px',
                cursor: 'pointer',
                display: 'flex',
                alignItems: 'center',
                justifyContent: 'center',
                width: 24,
                height: 24
              }}
              onClick={() => handleCopy(m.content)}
            >
              {isCopied ? (
                <CheckCircleIcon fontSize="inherit" />
              ) : (
                <ContentCopyIcon fontSize="inherit" />
              )}
            </Box>
          )}
          <Box sx={{ overflowWrap: 'break-word' }}>
            <Typography
              variant="caption"
              sx={{ fontWeight: 'bold', display: 'block' }}
            >
              {m.role === 'user' ? 'You' : 'AI'}
            </Typography>
            {m.role === 'user' ? (
              <ReactMarkdown
                components={componentsUser}
                remarkPlugins={[remarkGfm, remarkMath]}
                rehypePlugins={[rehypeHighlight]}
              >
                {m.content}
              </ReactMarkdown>
            ) : (
              <ReactMarkdown
                components={componentsAI}
                remarkPlugins={[remarkGfm, remarkMath]}
                rehypePlugins={[[rehypeHighlight, highlightOptionsAI]]}
              >
                {m.content}
              </ReactMarkdown>
            )}
          </Box>
        </ListItem>
      ))}
    </>
  );
};

and then rendered it in the output like this:

<List
  sx={{
    marginBottom: '120px'
  }}
>
  <ChatMessage messages={messages} />
</List>

This caused the maximum update depth to be exceeded.

However, after changing the structure to:

const MemoizedMessage = memo(({ message }: { message: Message }) => {
  const [isCopied, setIsCopied] = useState(false);
  const router = useRouter();
  const componentsAI: Partial<Components> = {
    a: ({ href, children }) => (
      <a
        href={href}
        onClick={(e) => {
          e.preventDefault();
          if (href) {
            router.push(href);
          }
        }}
      >
        {children}
      </a>
    ),
    code({ className, children, ...props }) {
      const match = /language-(\w+)/.exec(className || '');
      const language = match && match[1] ? match[1] : '';
      const inline = !language;
      if (inline) {
        return (
          <code className={className} {...props}>
            {children}
          </code>
        );
      }

      return (
        <div
          style={{
            position: 'relative',
            borderRadius: '5px',
            padding: '20px',
            marginTop: '20px',
            maxWidth: '100%'
          }}
        >
          <span
            style={{
              position: 'absolute',
              top: '0',
              left: '5px',
              fontSize: '0.8em',
              textTransform: 'uppercase'
            }}
          >
            {language}
          </span>
          <div
            style={{
              overflowX: 'auto',
              maxWidth: '1100px'
            }}
          >
            <pre style={{ margin: '0' }}>
              <code className={className} {...props}>
                {children}
              </code>
            </pre>
          </div>
        </div>
      );
    }
  };

  const componentsUser: Partial<Components> = {
    a: ({ href, children }) => (
      <a
        href={href}
        onClick={(e) => {
          e.preventDefault();
          if (href) {
            router.push(href);
          }
        }}
      >
        {children}
      </a>
    )
  };
  const copyToClipboard = (str: string): void => {
    void window.navigator.clipboard.writeText(str);
  };

  const handleCopy = (content: string) => {
    copyToClipboard(content);
    setIsCopied(true);
    setTimeout(() => setIsCopied(false), 1000);
  };

  return (
    <ListItem
      sx={
        message.role === 'user'
          ? messageStyles.userMessage
          : messageStyles.aiMessage
      }
    >
      <Box
        sx={{
          position: 'absolute',
          top: '10px',
          left: '10px'
        }}
      >
        {message.role === 'user' ? (
          <PersonIcon sx={{ color: '#4caf50' }} />
        ) : (
          <AndroidIcon sx={{ color: '#607d8b' }} />
        )}
      </Box>
      {message.role === 'assistant' && (
        <Box
          sx={{
            position: 'absolute',
            top: '5px',
            right: '5px',
            cursor: 'pointer',
            display: 'flex',
            alignItems: 'center',
            justifyContent: 'center',
            width: 24,
            height: 24
          }}
          onClick={() => handleCopy(message.content)}
        >
          {isCopied ? (
            <CheckCircleIcon fontSize="inherit" />
          ) : (
            <ContentCopyIcon fontSize="inherit" />
          )}
        </Box>
      )}
      <Box sx={{ overflowWrap: 'break-word' }}>
        <Typography
          variant="caption"
          sx={{ fontWeight: 'bold', display: 'block' }}
        >
          {message.role === 'user' ? 'You' : 'AI'}
        </Typography>
        {message.role === 'user' ? (
          <ReactMarkdown
            components={componentsUser}
            remarkPlugins={[remarkGfm, remarkMath]}
            rehypePlugins={[rehypeHighlight]}
          >
            {message.content}
          </ReactMarkdown>
        ) : (
          <ReactMarkdown
            components={componentsAI}
            remarkPlugins={[remarkGfm, remarkMath]}
            rehypePlugins={[[rehypeHighlight, highlightOptionsAI]]}
          >
            {message.content}
          </ReactMarkdown>
        )}
      </Box>
    </ListItem>
  );
});
MemoizedMessage.displayName = 'MemoizedMessage';
const ChatMessage: FC<ChatMessageProps> = ({ messages }) => {
  return (
    <>
      {messages.map((message, index) => (
        <MemoizedMessage key={`${message.id}-${index}`} message={message} />
      ))}
    </>
  );
};

the lag issues and the maximum update depth error seem to have disappeared completely. In dev I was only ever able to get maybe 4 or 5 messages before Chrome just gave up; now I can get pretty much as many as I want!

@lgrammel lgrammel added bug Something isn't working ai/ui and removed ai/rsc labels May 25, 2024
@lgrammel lgrammel self-assigned this May 27, 2024
@lgrammel
Collaborator

I tried to reproduce the bug with useCompletion and gpt-4o (4096 completion tokens). However, it did not show up for me.

@Jerry-VW Can you provide me with code to reproduce? Ideally some modification of the next/useCompletion example (which I tried with gpt-4o):

@ElectricCodeGuy

ElectricCodeGuy commented May 27, 2024

I have this older branch of my example project where I have the exact same problem using useChat and an API route.

https://github.com/ElectricCodeGuy/SupabaseAuthWithSSR/tree/0643777a348d63b7d0b0260ad99e5054be1b4062

In the dev environment it would always lag out and crash my browser after a few messages. In production I have not experienced the same level of lag; there it is on par with other chatbots I have tried, so only after maybe 20 messages does the UI begin to lock up.
I'm 90% convinced it has something to do with the .map function and re-renders, but I have not successfully found the root cause.

@choipd

choipd commented May 29, 2024

I discovered that when using the streaming method, errors occur when the response message reaches a certain length. Even with streaming, no errors occur if the response message is short. I suspect that this issue arises because the component updates every time a streaming input comes in.

@lgrammel
Collaborator

I'm looking for a minimal example, because it's unclear to me whether this is an issue with useChat / useCompletion or with the other React code.

@ElectricCodeGuy your example has a lot of other code, which makes it hard to pinpoint the issue

@Jerry-VW is this for a single response / completion or for a long chat?

@choipd I tried to produce a very long message (max tokens) with no issues. However, I have a pretty fast machine, and that might also play a role here

@bioshazard
Contributor

I ran into this trying to render chat.messages[0].content as a prop. Fixed it with a slice/map:

chat.messages.slice(0, 1).map((message) => <Component content={message.content} />)

I think the problem is React picking up on .content as a deep dependency to watch for updates. Running it through a map keeps the reactivity on the messages, maybe.

@arnab710

Any update?

@ted-marozzi

ted-marozzi commented Jun 20, 2024

TBH I think a lot of the previous answers are incorrect. I think it comes from long responses, which cause React to re-render too many times in a row (50 is the limit), so whether it happens depends entirely on the response size. Something that has temporarily fixed the issue for me has been to create a queue that chunks the stream values into batches of n before joining and streaming them to the frontend. This increases the maximum response size by a factor of n and can be adjusted as needed.

Here is a simplified example from my serverAction:

    ...
    const streamableValue = createStreamableValue("");

    const streamChunks = async () => {
      // Batch tokens so the client re-renders once per batch instead of once per token
      let queue: string[] = [];

      for await (const chunk of stream) {
        const content = chunk.choices[0]?.delta.content;
        if (content == null) {
          continue;
        }
        queue.push(content);
        if (queue.length >= 8) {
          streamableValue.append(queue.join(""));
          queue = [];
        }
      }

      streamableValue.append(queue.join(""));
      streamableValue.done();
    };

    streamChunks();

    return { value: streamableValue.value };

I haven't hit the error since, but that doesn't mean it couldn't occur with a really large response. I assume the real solution would be to somehow 'give React a break' while streaming.

@ted-marozzi

@arnab710 if you want an update kindly prepare a minimum reproduction of the issue for the maintainers.

@ted-marozzi

This code mentions "synchronously"; I wonder if, instead of setting state on each chunk, we should use startTransition, as that doesn't block the UI when updating.

// Count the number of times the root synchronously re-renders without
// finishing. If there are too many, it indicates an infinite update loop.

A state update marked as a Transition will be interrupted by other state updates. For example, if you update a chart component inside a Transition, but then start typing into an input while the chart is in the middle of a re-render, React will restart the rendering work on the chart component after handling the input state update.
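
A sketch of what that could look like on the consumer side (hypothetical; a real fix would presumably live inside the SDK where it calls its SWR setter):

import { startTransition, useEffect, useState } from 'react';
import { useCompletion } from 'ai/react';

function useDeferredCompletion() {
  const { completion, ...rest } = useCompletion();
  const [deferred, setDeferred] = useState('');

  useEffect(() => {
    // Mark the text update as non-urgent so React can interrupt it for
    // higher-priority work instead of piling up synchronous re-renders.
    startTransition(() => setDeferred(completion));
  }, [completion]);

  return { completion: deferred, ...rest };
}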

@arnab710

For me, the markdown library was causing performance issues due to the rapid influx of data chunks, which it was unable to process in real time (i.e. backpressure).
I resolved the lag effectively by optimizing my markdown component.
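
One common optimization along those lines (a sketch of the general idea, not the commenter's actual code) is to memoize the markdown renderer so a message is only re-parsed when its own content changes:

import { memo } from 'react';
import ReactMarkdown from 'react-markdown';

// Re-renders only when the content string changes; other streaming state
// updates in the parent no longer re-parse every message's markdown.
const MemoizedMarkdown = memo(
  ({ content }: { content: string }) => <ReactMarkdown>{content}</ReactMarkdown>,
  (prev, next) => prev.content === next.content
);

export default MemoizedMarkdown;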

@oalexdoda

@lgrammel

Having the same issue when messages is a dependency for something else. I tried wrapping it in a debounce, but then it loses real-time streaming.

I think useChat() (and similar streaming hooks) should have built-in support for a prop like debounceInterval that does all the debouncing behind the scenes.

@oalexdoda

Here's where it's coming from in case it helps. It's happening on every single component using the SDK to stream content.

[screenshot]

@oalexdoda

[screenshot]

I believe this needs to be throttled inside of the SDK directly.

@lgrammel
Collaborator

lgrammel commented Jul 8, 2024

@oalexdoda throttling would impact the stream consumption significantly, since we are using backpressure and the reading speed depends on the client speed. Before I move to a fix, I'd like to see a minimal reproduction that I can run myself. You mentioned that it's related to a message dependency. Would you mind putting together a minimal reproduction, either as PR or as a repo, so I can investigate?

@Pulseline-Tech

Pulseline-Tech commented Jul 12, 2024

So I fixed this using use-debounce, and the defaults I set seem to be the lowest numbers that avoid the error while still feeling like it's streaming:

import { useCompletion as useCompletionAI } from 'ai/react';
import { useDebounce } from 'use-debounce';

type UseCompletionArgs = Parameters<typeof useCompletionAI>[0] & {
  delay?: number;
  maxWait?: number;
};

export const useCompletion = ({ delay = 250, maxWait = 250, ...args }: UseCompletionArgs) => {
  const { completion, ...rest } = useCompletionAI(args);

  const [debouncedCompletion] = useDebounce(completion, delay, { maxWait });

  return {
    ...rest,
    completion: debouncedCompletion
  };
};
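
Usage is then a drop-in replacement for the original hook, with the extra delay/maxWait knobs (values and the api route here are illustrative):

const { completion, input, handleInputChange, handleSubmit } = useCompletion({
  api: '/api/completion', // assumed route handler
  delay: 250,
  maxWait: 250,
});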

@Pulseline-Tech

#2257

@lgrammel
Collaborator

lgrammel commented Jul 18, 2024

To summarize:

  • useChat / useCompletion read the stream as fast as possible
  • this generates many updates through message changes
  • these updates can overload UI rendering and need to be throttled

Considerations:

  • this needs to be handled across frameworks (react, svelte, etc) and helpers (useChat, useCompletion, useAssistant, useObject)
  • throttling should not lead to unnecessary delays (send data immediately on start, send remaining data immediately on stream close)
    • needs to "flush" when first token arrives
    • needs to "flush" when stream is closed/finished
  • throttling wait must be configurable with a reasonable default (e.g. 50ms)
  • no additional packages should be imported
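
A minimal sketch of a throttle that meets those constraints (an illustration of the requirements above, not the SDK's actual implementation):

// Throttle a callback to at most one call per waitMs, but:
// - fire immediately on the first call (flush when the first token arrives)
// - expose flush() so the final state is emitted when the stream closes
function createThrottle<T>(emit: (value: T) => void, waitMs = 50) {
  let last = 0;
  let pending: T | undefined;
  let timer: ReturnType<typeof setTimeout> | undefined;

  const fire = (value: T) => {
    last = Date.now();
    pending = undefined;
    emit(value);
  };

  return {
    call(value: T) {
      const elapsed = Date.now() - last;
      if (elapsed >= waitMs) {
        // leading edge: the first token renders immediately
        fire(value);
      } else {
        pending = value;
        if (timer === undefined) {
          timer = setTimeout(() => {
            timer = undefined;
            if (pending !== undefined) fire(pending);
          }, waitMs - elapsed);
        }
      }
    },
    // call when the stream is closed/finished so no data is left delayed
    flush() {
      if (timer !== undefined) clearTimeout(timer);
      timer = undefined;
      if (pending !== undefined) fire(pending);
    },
  };
}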

@arkaydeus

Still getting this error in the latest version. Is there a working manual workaround at the moment?

@HirotoShioi

HirotoShioi commented Sep 28, 2024

@arkaydeus

I had the same issue with my project and this is how I've solved it (with React):

The workaround seems to be to wrap the Message rendering component with memo.

This is the code I found on postgres.new:

import { memo } from 'react'
import type { Message } from 'ai'

export type ChatMessageProps = {
  message: Message
  isLast: boolean
}

function ChatMessage({ message, isLast }: ChatMessageProps) {
 // implementation of rendering Message
}

// Memoizing is important here - otherwise React continually
// re-renders previous messages unnecessarily (big performance hit)
export default memo(ChatMessage, (prevProps, nextProps) => {
  // Always re-render the last message to fix a bug where `useChat()`
  // doesn't trigger a re-render when multiple tool calls are added
  // to the same message. Otherwise shallow compare.
  return (
    !nextProps.isLast &&
    prevProps.isLast === nextProps.isLast &&
    prevProps.message === nextProps.message
  )
})

How ChatMessage is being used in chat.tsx:

{messages.map((message, i) => (
  <ChatMessage
    key={message.id}
    message={message}
    isLast={i === messages.length - 1}
  />
))}

@oalexdoda

oalexdoda commented Oct 23, 2024

I'm currently using a custom fork of useChat and useCompletion that I need to update every time the ai package gets updated. It's the only way I could come up with to prevent the maximum update depth error (even after trying memo). Basically, it uses throttle from lodash to prevent the update callbacks from firing too often.

useChat
[screenshot of the forked useChat]

useCompletion
[screenshot of the forked useCompletion]
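
A sketch of the same idea without forking the package (hypothetical; this throttles the hook's output rather than patching the SDK's internal callbacks):

import { useEffect, useMemo, useState } from 'react';
import throttle from 'lodash/throttle';
import { useChat, type Message } from 'ai/react';

export function useThrottledChat(waitMs = 100) {
  const chat = useChat();
  const [messages, setMessages] = useState<Message[]>([]);

  // At most one downstream update per waitMs; lodash's trailing call
  // ensures the final chunk still renders after the stream ends.
  const push = useMemo(
    () => throttle((m: Message[]) => setMessages(m), waitMs),
    [waitMs]
  );

  useEffect(() => {
    push(chat.messages);
  }, [chat.messages, push]);

  return { ...chat, messages };
}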

It's not sustainable, but it works for now. The other observation I have here is that if the component where you use useChat throws any sort of error (e.g. a hydration error), whether in itself or in its children, the maximum update depth will still be exceeded (probably because of the console.error being forwarded to the browser).

Likewise, if you console.log anything at all outside of a useEffect, it again exceeds the maximum update depth.

Not sure if this is a React compiler issue (I'm still on Next 14), or if a permanent fix can be baked into the SDK. Most of the time it errors on very long conversations, and I've seen it crash for users in production (to the application-error white screen of death).

@jhangmez

> @arkaydeus I had the same issue with my project and this is how I've solved it (with React): The workaround seems to be to wrap the Message rendering component with memo [...]

(quoting @HirotoShioi's comment above in full)

I've tried this solution but it doesn't work, so I will be waiting for the solution that the last reply provides.

@lgrammel
Collaborator

lgrammel commented Nov 1, 2024

#3440

@lgrammel
Collaborator

lgrammel commented Nov 1, 2024

@ai-sdk/[email protected] has an experimental_throttle setting for useChat and useCompletion. You can set it to a millisecond amount to specify the throttling interval.
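
Usage looks roughly like this (the throttle wait is in milliseconds):

import { useChat } from '@ai-sdk/react';

const { messages, input, handleInputChange, handleSubmit } = useChat({
  // Re-render at most once every 50 ms while streaming
  experimental_throttle: 50,
});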

@lgrammel lgrammel closed this as completed Nov 1, 2024