☂️ OYB Next Steps #6442
Comments
Great analysis. AIs look good!
Can we parallelize the protocol work?
Yeah, very.
Yeah we're collecting this as fast as possible, so reducing categories/trace-size is the only headroom.
Brendan had some ideas on constructing the JPEG ArrayBuffers in JS, using some boring nearest-neighbor logic.
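A minimal sketch of what that nearest-neighbor step could look like (the function name, RGBA layout, and integer math here are assumptions for illustration, not Lighthouse's actual implementation):

```javascript
// Downscale raw RGBA pixel data with nearest-neighbor sampling.
// `src` is a flat Uint8ClampedArray of RGBA bytes (4 per pixel).
function downscaleNearestNeighbor(src, srcWidth, srcHeight, dstWidth, dstHeight) {
  const dst = new Uint8ClampedArray(dstWidth * dstHeight * 4);
  for (let y = 0; y < dstHeight; y++) {
    // Map each destination row to the nearest source row.
    const srcY = Math.floor((y * srcHeight) / dstHeight);
    for (let x = 0; x < dstWidth; x++) {
      const srcX = Math.floor((x * srcWidth) / dstWidth);
      const srcIdx = (srcY * srcWidth + srcX) * 4;
      const dstIdx = (y * dstWidth + x) * 4;
      dst[dstIdx] = src[srcIdx];         // R
      dst[dstIdx + 1] = src[srcIdx + 1]; // G
      dst[dstIdx + 2] = src[srcIdx + 2]; // B
      dst[dstIdx + 3] = src[srcIdx + 3]; // A
    }
  }
  return dst;
}
```

The resulting buffer would then be handed to the JPEG encoder, so no full-resolution canvas work is needed.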
this is really great!
I believe this includes
I think you're remembering what @patrickhulce wrote and actually landed :) We can look at how V8 handles the code, but at first glance it doesn't look like it's going to get much faster. Maybe the time is actually from JPEG decoding?
@paulirish and I looked at one or two major axe performance sinks recently (the table one? and something else) and there's definitely opportunity. axe is complicated, though, with a lot of idiosyncrasies and layered recursive calls, so it's going to take some time investment if we want to get involved.
Ha! Apparently so! :)
I found one recent win that's pretty easy: dequelabs/axe-core#1172 (discussion has led to a different solution, but it's also straightforward). After this, the next candidates are probably either the table stuff or color contrast. Both rather spooky. 🎃
Ha, yeah, I realized this as soon as I started digging in: there's only <100ms of headroom after the require parts. It's also quite possible this is very different in LR, since locally it means hitting the filesystem to require everything, while LR executes our browserified module functions.
Yeah, that's probably true too. Maybe your wasm JPEG will get us some wins here for free?
RE: Load blank - there's a WIP branch and details about the blocker here: #3707 (comment)
I was also thinking we could replace the time we spend loading all locale files with a single dynamic import. Good point, though: it will only make a difference for node/CLI (maaaaybe we'd get a lazy-parsing bonus, but that's hard to hit).
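The lazy-locale idea above could be sketched as a loader registry that resolves a locale's messages only on first use. In real code each loader would be a dynamic `import('./locales/<name>.json')`; plain async thunks are used here to keep the sketch self-contained, and all names are hypothetical:

```javascript
// Each entry is a thunk, so nothing is read or parsed until requested.
// In practice: en: () => import('./locales/en.json'), etc.
const localeLoaders = {
  en: async () => ({greeting: 'Hello'}),
  de: async () => ({greeting: 'Hallo'}),
};

const localeCache = new Map();

// Load and cache a single locale's messages on demand.
async function getLocaleMessages(locale) {
  if (!localeCache.has(locale)) {
    localeCache.set(locale, await localeLoaders[locale]());
  }
  return localeCache.get(locale);
}
```

Startup then pays only for the one locale actually in use instead of requiring every locale file eagerly.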
Revisiting this ~20 months later, I think we've addressed the key AIs here already, and much has changed since that analysis. If run time becomes an important issue again, we can do another round of investigations and takeaways.
❔ ❔ ❔ Curious what OYB is? ❔ ❔ ❔
Summary
I did a quick repurposing of DZL to analyze where our time is going in aggregate. These are the top 30 average timings across ~400 runs today.
Looking closer at the top 10...
Summary of Near-term AIs:
Don't run benchmark index if not automatic timing @patrickhulce (core(throttling): add option to slowdown to a device class #6162, pending)