[Performance] Typing in search is very laggy for big accounts #46591
Auto-assigning issues to engineers is no longer supported. If you think this issue should receive engineering attention, please raise it in #whatsnext.
cc @sakluger (feel free to assign me, as I (or someone from my team) will work on this ticket!)
Just started working on this issue. I can roughly reproduce the lag if I enable lower performance settings in Chrome (basically throttling my CPU closer to the one the customer had). Working on ideas for improving the performance here!
Created a PR that adds performance timings, so we can track upcoming performance improvements better:
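For context, here is a minimal sketch of how such timings could be recorded with the standard Performance API; the marker names below are hypothetical and not necessarily what the PR uses.

```typescript
// Hypothetical helper for timing a search run with performance.mark/measure.
// Marker names ('search:start', 'search:end', 'search') are illustrative only.
function measureSearch<T>(run: () => T): T {
    performance.mark('search:start');
    const result = run();
    performance.mark('search:end');
    performance.measure('search', 'search:start', 'search:end');
    return result;
}
```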
Okay, I have a proposal for a solution moving forward.
High-level solution: use a trie structure for search
Right now our search data structure and search algorithm aren't very efficient: we loop over every item, performing multiple comparison operations on each. We can make this far more efficient by using a trie structure and prefix search (see the sketch at the end of this comment).
Performance gains
I ran a test with the customer's Onyx data, for now focusing only on searching through the personal details.
So, right now this user on mobile has a dead JS thread for ~300 ms when typing in the search (dropping 18 frames), while with a trie it's just ~0.5 ms, so effectively no frames are dropped.
Changes needed / roadmap
Caveats
Right now building the trie takes some time and a fair amount of memory. I want to experiment a bit more with other data structures, such as a generalised suffix tree or suffix arrays, and see which one works best!
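To illustrate the idea, here is a minimal sketch of a trie index over personal details, assuming we key on lowercased display names; the types and helper names (PersonalDetail, buildTrie, searchPrefix) are illustrative, not the actual App implementation.

```typescript
// Each personal detail is indexed by its lowercased display name (assumption).
type PersonalDetail = {accountID: number; displayName: string};

type TrieNode = {
    children: Map<string, TrieNode>;
    // Account IDs whose indexed string passes through this node.
    matches: Set<number>;
};

function createNode(): TrieNode {
    return {children: new Map(), matches: new Set()};
}

// Build the trie once (e.g. when the search page mounts or Onyx data changes).
function buildTrie(details: PersonalDetail[]): TrieNode {
    const root = createNode();
    for (const detail of details) {
        let node = root;
        for (const char of detail.displayName.toLowerCase()) {
            let child = node.children.get(char);
            if (!child) {
                child = createNode();
                node.children.set(char, child);
            }
            child.matches.add(detail.accountID);
            node = child;
        }
    }
    return root;
}

// Prefix lookup: walk one node per character of the query instead of
// scanning every personal detail on every keystroke.
function searchPrefix(root: TrieNode, query: string): Set<number> {
    let node = root;
    for (const char of query.toLowerCase()) {
        const child = node.children.get(char);
        if (!child) {
            return new Set();
        }
        node = child;
    }
    return node.matches;
}
```

With this shape, each keystroke only walks as many nodes as the query has characters rather than comparing against every contact, which is where the ~300 ms to ~0.5 ms improvement mentioned above comes from; the trade-off is the build time and memory noted in the caveats.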
The solution for this issue has been 🚀 deployed to production 🚀 in version 9.0.17-2 and is now subject to a 7-day regression period 📆. Here is the list of pull requests that resolve this issue: If no regressions arise, payment will be issued on 2024-08-14. 🎊 For reference, here are some details about the assignees on this issue:
Started a discussion here about how we want to move forward technically: https://expensify.slack.com/archives/C05LX9D6E07/p1723209631007999
No payment required for the above PR:
I'm a bit confused about the status of the PR as well as how to handle payment. The original PR (#48652) was merged, then reverted, then we reverted the revert. Does that mean the changes are in fact live? I see @ahmedGaber93 and @allgandalf reviewed the PR. Do they both require payment? And for those payments, should we decrease payment because of the regressions? Or is it unclear what caused those regressions? To summarize, I'd love it if someone could help me figure out who needs to be paid and how much 😄
The PR was reverted again due to this regression: #51584. No payment is needed at this moment.
If you are the assigned CME, please investigate whether the linked PR caused a regression and leave a comment with the results. If a regression has occurred and you are the assigned CM, follow the instructions here. If this regression could have been avoided, please consider also proposing a recommendation to the PR checklist so that we can avoid it in the future.
The solution for this issue has been 🚀 deployed to production 🚀 in version 9.0.54-11 and is now subject to a 7-day regression period 📆. Here is the list of pull requests that resolve this issue: If no regressions arise, payment will be issued on 2024-11-05. 🎊 For reference, here are some details about the assignees on this issue:
Okay I'll hold off on payment and ask again when it seems like things have settled a bit. |
New PR is available here for review:
Payment Summary
BugZero Checklist (@sakluger)
Removing the payment hold and updating the title since it doesn't seem like we're ready for payment quite yet. |
If you haven’t already, check out our contributing guidelines for onboarding and email [email protected] to request to join our Slack channel!
What performance issue do we need to solve?
On a very big account, typing in search for the first time is incredibly laggy (see the recording). The user is continuously trying to type, but the application is dropping so many frames that the text only updates in jumps:
slow_search.mov
What is the impact of this on end-users?
Very slow and frustrating search experience. Borderline functional.
List any benchmarks that show the severity of the issue
The customer shared a profile trace with us:
Firefox 2024-07-25 10.42 profile.json.gz
(note: the trace also contains other test cases)
The customer had ~15k reports loaded in Onyx when these tests were conducted, although in focus mode only 6 chats were displayed.
Proposed solution (if any)
None yet. I will go through the profile to see what can be optimised and what exactly caused those lags.
List any benchmarks after implementing the changes to show impacts of the proposed solution (if any)
Not available yet.
Platforms:
Which of our officially supported platforms is this issue occurring on?
Version Number: v9.0.11-5
Reproducible in staging?: not tested
Reproducible in production?: yes
Email or phone of affected tester (no customers): customer
Logs: See performance file
Notes/Photos/Videos: See attached video
Expensify/Expensify Issue URL: n/a
Issue reported by: @hannojg
Slack conversation: https://expensify.slack.com/archives/C05LX9D6E07/p1721919928992729
View all open jobs on Upwork
Issue Owner
Current Issue Owner: @sakluger