Cody Web: Add high-level memoization for chat/transcript UI #5036

Merged (4 commits, Jul 29, 2024)

Conversation

vovakulikov
Contributor

Attempt to fix https://linear.app/sourcegraph/issue/SRCH-724/prinova-found-an-issue-on-cody-web

The problem some customers are hitting is somewhere in the React rendering pipeline. I couldn't reproduce the issue from the report on my machine, but I noticed that even though we use useCallback and useMemo hooks in high-level components, we don't apply memoization at the component level anywhere. This means the whole chat (all messages) is re-rendered whenever a single message changes (by re-rendering I mean React reconciliation).
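To illustrate, here is a minimal sketch of the kind of component-level memoization this PR adds (the component and prop names are hypothetical, not the actual Cody components):

```tsx
import React, { memo } from 'react'

// Hypothetical message shape, for illustration only.
interface MessageProps {
    speaker: 'human' | 'assistant'
    text: string
}

// Without memo(), every message in the transcript goes through React
// reconciliation whenever any single message changes. With memo(), React
// skips messages whose props are shallow-equal to the previous render.
const TranscriptMessage: React.FunctionComponent<MessageProps> = memo(({ speaker, text }) => (
    <div className={speaker === 'human' ? 'human-message' : 'assistant-message'}>{text}</div>
))
```

Note that memo() only helps when the props passed in are referentially stable, which is why the prop-drilling removal and the useCallback/useRef changes below matter.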

This PR

  • Removes prop drilling from high-level components (it's easier to track the dependency graph when we don't spread the whole props object into each component)
  • Adds memo() wrappers for heavy UI components such as transcript interactions and their children (human input, assistant message, etc.)

Note: I chained this PR onto an unrelated telemetry fix just to avoid publishing duplicate package versions to npm (I will merge them in sequence as they are reviewed)

Test plan

  • Standard high-level manual checks (the changes in this PR don't alter anything in the current feature set, so standard e2e checks should be enough to catch possible regressions)

@vovakulikov vovakulikov requested review from umpox, abeatrix and a team July 29, 2024 04:47
@vovakulikov vovakulikov self-assigned this Jul 29, 2024
@@ -48,6 +48,10 @@ export const Chat: React.FunctionComponent<React.PropsWithChildren<ChatboxProps>
}) => {
const { reload: reloadMentionProviders } = useContextProviders()
const telemetryRecorder = useTelemetryRecorder()

const transcriptRef = useRef(transcript)
transcriptRef.current = transcript
Member
If we are updating ref.current on every render, then how is it different from using the transcript variable directly? Is it just so we don't have transcript as a dependency in feedbackButtonsOnSubmit = useCallback? If so, we can simply remove it from the deps list and use eslint-disable.

I wish I had Cody in the browser to ask this question to.

Contributor Author

> If we are updating ref.current on every render, then how is it different from using the transcript variable directly?

Refs don't invalidate useCallback memoization, so we keep the same function identity even though ref.current changes.

> Is it just so we don't have transcript as a dependency in feedbackButtonsOnSubmit = useCallback?

In that case, the function we pass to useCallback would capture the first transcript value in its closure scope, and we wouldn't have the most recent value when the function is eventually called.
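To make the tradeoff concrete, a minimal sketch of the pattern under discussion (the hook name and logging are hypothetical; feedbackButtonsOnSubmit is the real callback from the diff above):

```tsx
import { useCallback, useRef } from 'react'

function useFeedbackSubmit(transcript: readonly unknown[]) {
    const transcriptRef = useRef(transcript)
    // Refreshed on every render, but the ref object itself never changes,
    // so writing to it does not invalidate any memoization.
    transcriptRef.current = transcript

    // Empty dependency list: the returned function keeps the same identity
    // across renders, so memoized children receiving it won't re-render.
    return useCallback((text: string) => {
        // Read the latest transcript at call time instead of capturing a
        // stale value in the closure.
        console.log('feedback:', text, 'messages so far:', transcriptRef.current.length)
    }, [])
}
```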

contextItems: ContextItem[] | undefined
model?: Model['model']
isForFirstMessage: boolean
className?: string

/** For use in storybooks only. */
__storybook__initialOpen?: boolean
-}> = ({ contextItems, model, isForFirstMessage, className, __storybook__initialOpen }) => {
+}> = memo(({ contextItems, model, isForFirstMessage, className, __storybook__initialOpen }) => {
Member
👍

@@ -296,22 +296,32 @@ export const CodyWebChatProvider: FC<PropsWithChildren<CodyWebChatProviderProps>
[client, vscodeAPI, setLastActiveChatID]
)

const contextInfo = useMemo(
Member
@thenamankumar thenamankumar Jul 29, 2024
Yeah, React sucks! The context API should somehow do this out of the box.

Contributor Author

Yeah, this was discussed at some point in reactjs/rfcs#150.
dai-shi even got it working as a library, use-context-selector (https://github.com/dai-shi/use-context-selector?tab=readme-ov-file), but I think it never made it to production, unfortunately.
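For reference, a simplified sketch (the real provider has more fields) of what memoizing the context value buys here:

```tsx
import { createContext, useMemo, type FC, type PropsWithChildren } from 'react'

// Hypothetical, simplified shape of the provider's context value.
interface ContextInfo {
    repositories: string[]
}

const CodyWebContext = createContext<ContextInfo>({ repositories: [] })

const Provider: FC<PropsWithChildren<{ repositories: string[] }>> = ({
    repositories,
    children,
}) => {
    // Without useMemo, a fresh object would be created on every provider
    // render, forcing every useContext consumer to re-render even when
    // repositories hasn't changed.
    const contextInfo = useMemo(() => ({ repositories }), [repositories])
    return <CodyWebContext.Provider value={contextInfo}>{children}</CodyWebContext.Provider>
}
```

React still re-renders every consumer whenever the memoized value actually changes; per-field subscriptions are exactly what use-context-selector adds on top.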

@vovakulikov vovakulikov force-pushed the vk/fix-remote-llm-models-for-cody-web branch from 8a257e4 to e9f620a on July 29, 2024 16:47
Base automatically changed from vk/fix-remote-llm-models-for-cody-web to main July 29, 2024 17:04
@vovakulikov vovakulikov merged commit 284ab72 into main Jul 29, 2024
19 checks passed
@vovakulikov vovakulikov deleted the vk/improve-cody-ui-rendering branch July 29, 2024 17:51
vovakulikov added a commit that referenced this pull request Oct 10, 2024
Fixes part of
https://linear.app/sourcegraph/issue/CODY-3943/optimize-performance-for-large-chat-windows-in-cody

It looks like since #5036 we've lost memoization in a few places. As a
result, our chat UI doesn't perform well with more than 4-5 messages
(depending on how long these chat messages are).

This PR tries to improve that by fixing memoization on expensive
components (Transcript, HumanMessage, etc.).

| Before | After |
| ------- | ------- |
| <video src="https://github.com/user-attachments/assets/79e7b93c-ac82-4a8c-9b2e-6c712af3fb2c" /> | <video src="https://github.com/user-attachments/assets/223b3513-0f30-47cc-b24c-c54f4861329c" /> |

This PR also fixes a problem with one-box intent detection;
previously, it produced a state where we didn't have a follow-up message
UI during intent resolution.

## Test plan
- Check with the React profiler that we don't re-render components during
a message response in places where nothing needs to re-render
- Check that intent detection in one-box doesn't produce UI flashes
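One way to run the first check is React's built-in <Profiler> component; a hypothetical harness (Transcript stands in for the real component, whose props are omitted here):

```tsx
import { Profiler, type FC, type ProfilerOnRenderCallback } from 'react'

const logRender: ProfilerOnRenderCallback = (id, phase, actualDuration) => {
    // Fires on every commit inside the subtree. While an assistant response
    // streams in, only the active message should show up here; seeing the
    // whole transcript re-commit would indicate broken memoization.
    console.log(`${id} ${phase}: ${actualDuration.toFixed(1)}ms`)
}

export const ProfiledTranscript: FC = () => (
    <Profiler id="Transcript" onRender={logRender}>
        {/* <Transcript ...realProps /> */}
    </Profiler>
)
```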
PriNova pushed a commit to PriNova/Cody-Community-Edition that referenced this pull request Oct 11, 2024
(same commit message as above, with #5036 referenced as sourcegraph#5036)