Performance regression reported from v5.1.23 #6042
I've now set up a very basic performance testing framework. It consists of:

The repository is here: https://github.com/Jermolene/tiddlywiki-performance-test-rig

At the moment, after running the tests several times, I'm not seeing any performance regression between v5.1.23 and master:
Output:
The inconsistency may be because the test rig isn't measuring the correct thing. It would be helpful to dig into your wiki – would you be prepared to share it privately?
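(For context, here is a minimal sketch of how a Puppeteer-based rig might capture such startup timings by scraping the browser console; the file name and the log format it matches are assumptions for illustration, not the actual test-rig code.)

```js
// Hypothetical sketch, not the actual test-rig code.
// Assumes performance instrumentation is enabled in the wiki under test, so
// that timings such as "mainRender: 150.00ms" appear on the console.
const puppeteer = require("puppeteer");
const path = require("path");

(async () => {
	const browser = await puppeteer.launch();
	const page = await browser.newPage();
	const timings = [];
	// Collect console messages that look like instrumentation timings
	page.on("console", (msg) => {
		const match = /^(\w+): ([\d.]+)ms$/.exec(msg.text());
		if (match) {
			timings.push({ name: match[1], ms: parseFloat(match[2]) });
		}
	});
	// Load a local copy of the wiki being measured (file name is hypothetical)
	await page.goto("file://" + path.resolve(__dirname, "wiki-under-test.html"));
	// Allow the startup render to finish before reporting
	await new Promise((resolve) => setTimeout(resolve, 5000));
	console.log(timings);
	await browser.close();
})();
```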
If you aren't able to get a copy of Hubert's wiki or you want another test case, you could try my public Zettelkasten... you should be able to just hit the download button and then run it through the upgrader. It has 2400 tiddlers and quite a complicated view template. I don't have time to try turning on the instrumentation and looking for numbers at the moment, but it does feel just a wee bit slower after doing an upgrade. The machine I'm on is so fast that it's hard to tell the difference sometimes.
Thanks @sobjornstad. For your wiki, I'm seeing slightly slower timings on the prerelease than v5.1.23. To emphasise, this might be a limitation of the test rig; all it does at the moment is take the timings that are enabled by
Changing
Thanks for looking into this further @Jermolene and apologies for responding late. Unfortunately, I cannot share my main wiki but could replicate my initial findings with a test pair of empty wikis, one running 5.1.23 and the other one being the empty 5.2.0-prerelease. In each wiki, I've created 5000 dummy tiddlers that only contain some lorem ipsum plaintext (no wikitext, widgets or custom macros). Overall, each wiki has 5003 ordinary tiddlers and is just over 17 megabytes in size. They should be identical in terms of user content. By running performance instrumentation manually like before with my own wiki, I've found the results to be consistent with my initial findings (5.1.23 is on the left, 5.2.0-prerelease on the right):

Saving a new tiddler
Clicking on the Recent tab
Opening Control Panel
Opening a tiddler by clicking its title as a search result (sidebar)
Clicking to edit a tiddler called "Startpage"
Opening default tiddlers (clicking the home button)
Creating a tiddler (clicking the + button on the sidebar) so that it goes into edit mode
Deleting an open tiddler in draft mode
Thanks @hubertgk. I used the
I assume the inconsistency is because your tests are all interactive, whereas the test rig is just measuring the initial rendering. I'll look at extending the test rig, but in the meantime it might be helpful if you could report the startup render times you're seeing with the updated
@hubertgk I tried some of your tests using your test wikis and I don't see a significant performance difference. Also worth noting that having the zoomin story view active can confound testing, as the number of open tiddlers can vary without one realizing it.

Clicking on the Recent tab
Opening Control Panel
Opening a tiddler by clicking its title as a search result (sidebar)
Clicking to edit a tiddler called "Startpage"
Opening default tiddlers (clicking the home button)
Creating a tiddler (clicking the + button on the sidebar) so that it goes into edit mode
With 5.1.23

Startup:
Clicking the Home button (opening default tiddlers):
Clicking the plus button (new tiddler):

With 5.2.0-prerelease

Startup:
Clicking the Home button (opening default tiddlers):
Clicking the plus button (new tiddler):
Is it then recommended to open as many tiddlers as possible at startup to improve overall performance (at the cost of a longer startup time)?
No, not at all. I'm just opening lots of tiddlers to make the startup time longer, and hence easier to measure.
Thanks @saqimtiaz
Fair point; however, I'm aware of this. In my tests, I performed the exact same clicks on both test wikis so that the same tiddlers remain open, the same tabs are selected, etc. Again, this is not really an issue on desktop, but the lag is becoming a UX problem on mobile (it's there that I first noticed it). I'm not quite sure why we're getting inconsistent readings.
I think it may also be important to check the browser vendors. It seems Hubert uses FF, which I also use. I can confirm that there are differences:

Switch between Open and Recent tab, 5.1.23 vs 5.2.0
Thanks @pmario. Puppeteer supports Firefox now so I'll have a look at adding it to the rig.
@pmario @hubertgk here is a patched 5.2.0 file for testing: https://saqimtiaz.github.io/sq-tw/temp/5.2.0-prerelease-patched.html Can you please confirm if the performance issue persists here, especially with filter execution?
@saqimtiaz 5.2.0-prerelease - old sort code prior to #dbd3f83
@pmario thank you. So that change to the sorting is definitely the major contributor to the slowdown in filter evaluation. The question remains whether that resolves all performance issues or whether there might be other changes that are also contributing.
I don't know how feasible this would be, but would it make sense to track global/per-tiddler cache hit/miss ratios, or somehow reflect how long potentially cached operations are taking during each step? I imagine that might influence timings from startup vs timings from after the wiki has had a chance to warm up a bit!
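(As an illustration of the kind of instrumentation being suggested, a hit/miss counting cache could look like the sketch below; the names and structure are hypothetical and are not TiddlyWiki's actual cache code.)

```js
// Illustrative sketch only: a generic hit/miss counting cache.
function makeCountingCache() {
	const store = new Map();
	const stats = { hits: 0, misses: 0 };
	return {
		// Return the cached value for key, computing it with initializer on a miss
		get(key, initializer) {
			if (store.has(key)) {
				stats.hits++;
				return store.get(key);
			}
			stats.misses++;
			const value = initializer();
			store.set(key, value);
			return value;
		},
		// Mimics "the cache is destroyed whenever any tiddler is changed"
		clear() {
			store.clear();
		},
		report() {
			const total = stats.hits + stats.misses;
			const ratio = total ? ((100 * stats.hits) / total).toFixed(1) : "0.0";
			return `cache hit ratio ${ratio}% (${stats.hits} hits, ${stats.misses} misses)`;
		}
	};
}

// Example: wrap an expensive computation and print the ratio after an interaction
// const cache = makeCountingCache();
// cache.get("parsed:SomeTiddler", () => "expensive parse result");
// console.log(cache.report());
```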
I was looking at https://github.com/Jermolene/TiddlyWiki5/blob/1ddc3ab037bab8cefa0b0173367335ea31d665a5/core/modules/startup/render.js#L80-L104 and I think I found a bug: if only throttled tiddlers have changed, the actual refresh will happen in another execution of the event loop, but the function provided to
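(A generic illustration of the pitfall being described: a synchronous timing wrapper cannot see work that is deferred to a later turn of the event loop. This is a simplified sketch, not the code from render.js.)

```js
// Simplified illustration; the names are placeholders.
function timed(name, fn) {
	const start = Date.now();
	fn();
	// Only the work done before fn() returns is included in the measurement
	console.log(name + ": " + (Date.now() - start) + "ms");
}

function doExpensiveRefresh() {
	// Placeholder for the deferred refresh work
}

timed("mainRefresh", function () {
	// If only throttled tiddlers changed, the real refresh is scheduled for a
	// later turn of the event loop ...
	setTimeout(doExpensiveRefresh, 100);
	// ... so none of its cost shows up in the reported "mainRefresh" time.
});
```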
Right. But I think that isn't a big problem. The
I think this would be interesting... Especially since the global cache is destroyed whenever any tiddler is changed, and this basically happens with every user interaction.
Thanks @saqimtiaz @pmario @hoelzro I've updated the test rig to properly measure sorting by introducing the following test tiddler:
Using the data from Soren's wiki, I get the following timings:
So, it would appear that dbd3f83 is masking what would otherwise be a doubling in performance on this test. I will now use the test rig to investigate alternative fixes to #5701
Thanks @hoelzro, I think you're correct; would you mind opening a separate issue?
I've just pushed a simple fix that brings the master execution time into line with v5.1.23 – the test rig gives roughly the same timings for each now.
I can't duplicate that result now; I think I was testing the wrong wiki in that instance.
@Jermolene .. This function is used very often in the core .. so if we can make it faster, it would be an improvement, right? I saw
So I think they are redundant ... Just a thought.
That's right, I'm investigating some simple optimisations now.
I think titles.sort() could start like this:

Init of x and y moved up from the middle of the code to the init area.
The else path isn't needed, since a and b are initialized already.
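(A simplified sketch of the kind of restructuring suggested here, with the field lookup passed in as a getTiddler function so the example stands alone; it is not the actual core comparator.)

```js
// Hypothetical sketch: x and y are initialised at the top ("init area"), so
// the else branches become unnecessary.
function sortTitles(titles, sortField, isDescending, getTiddler) {
	titles.sort(function (a, b) {
		// Comparison values default to the titles themselves
		var x = a,
			y = b;
		if (sortField !== "title") {
			var tiddlerA = getTiddler(a),
				tiddlerB = getTiddler(b);
			// No else paths needed: x and y already hold usable defaults
			if (tiddlerA) {
				x = tiddlerA.fields[sortField] || "";
			}
			if (tiddlerB) {
				y = tiddlerB.fields[sortField] || "";
			}
		}
		var result = String(x).localeCompare(String(y));
		return isDescending ? -result : result;
	});
	return titles;
}
```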
I'm preparing a PR with some simple optimisations and unrolling of the sort function. The timings for Soren's wiki are promising:
(575c233 is the last commit before reverting the problematic commit dbd3f83)
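(For illustration, "unrolling" the sort can mean making the per-field decisions once, up front, and returning a small specialised comparator, roughly as sketched below; this is a hypothetical sketch, not the code from the PR.)

```js
// Hypothetical sketch: the direction and case-handling choices are made once,
// outside the comparator, so the hot comparison function stays small.
function makeComparator(isDescending, caseSensitive, getFieldValue) {
	var direction = isDescending ? -1 : 1;
	if (caseSensitive) {
		return function (a, b) {
			return direction * String(getFieldValue(a)).localeCompare(String(getFieldValue(b)));
		};
	}
	return function (a, b) {
		return direction * String(getFieldValue(a)).toLowerCase()
			.localeCompare(String(getFieldValue(b)).toLowerCase());
	};
}

// Example usage with a plain title lookup (hypothetical):
// titles.sort(makeComparator(false, false, function (title) { return title; }));
```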
The PR for optimising tiddler sorting is #6053.
@Jermolene Done: #6054
From https://talk.tiddlywiki.org/t/tw-5-2-0-prerelease-significantly-slower-than-tw-5-1-23-instrumentation-data-included/776:
I’ve tested the 5.2.0-prerelease over the past few days both on mobile and on desktop.
I’ve noticed significantly degraded performance on mobile and did some performance instrumentation tests on my desktop. Results below.
I’ve primarily tested the basic TW operations such as navigating links (opening, closing tiddlers), selecting checkboxes, radio buttons, clicking buttons etc.
The overall performance is significantly slower in 5.2.0 compared to the same wiki running on 5.1.23. It appears that mainRenderer and mainRefresh see the biggest slowdowns.
Here’s some data from the two identical wikis running side by side, the figures on the left correspond to the 5.1.23 version and the ones on the right are from the 5.2.0-prerelease. In each “test run” I had all tiddlers closed beforehand and then performed identical navigation:
mainRenderer values

Opening tiddlers (variable levels of complexity):
151 ms vs 276 ms
149 ms vs 230 ms
242 ms vs 302 ms
105 ms vs 196 ms
398 ms vs 503 ms

Opening Control Panel:
90 ms vs 190 ms

Opening Advanced Search:
83 ms vs 195 ms

Clicking on a checkbox that sets a value in a data tiddler:
67 ms vs 160 ms

Pressing a button that sets a value in a data tiddler:
175 ms vs 278 ms

Entering a password in an encrypted wiki, upon pressing enter:
mainRender (v5.1.23): 226.00 ms vs mainRender (v5.2.0-prerelease): 350.00 ms
mainRefresh (v5.1.23): 74.00 ms vs mainRefresh (v5.2.0-prerelease): 192.00 ms
I’ve performed these tests on my desktop with an 8-core Ryzen CPU clocked at 3.60 GHz and 32 GB of RAM. The performance on my midrange Motorola phone is at least 5 times slower. While the desktop slowdowns on this system are negligible, on mobile the slowdown introduces major usability issues.
The values above are obviously system- and wiki-dependent, but the point I’m trying to make is that, in relative terms, 5.2.0-prerelease is significantly slower than 5.1.23. I’ve run these tests on the same system and the same wiki, with the only difference being the TiddlyWiki version.
I’d like to hear your thoughts about this and also if there’s anything you could suggest. Could anyone perhaps perform similar tests on their end?
One point to note is that the wiki I used was password protected, which means that the new JSON parsing probably does not affect it yet (unless I misunderstood). However, the core render modules are still slower to render the wiki and it would be interesting to learn why that should be the case.