core(network-recorder): remove quic-request-finished workaround #9744
Conversation
perf-wise, since this is in the n^2 path, we get like a 20% improvement. That leaves about 20% of that total runtime spent on idle detection... we could make it O(1), but I'm not sure those last 10ms are worth it for how invasive the extra request counting would be.
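For reference, a rough sketch of what network-quiet detection computes (purely illustrative, not Lighthouse's actual findNetworkQuietPeriods): given the records and an allowed number of concurrent requests, find the time ranges where no more than that many requests are in flight. Re-running a scan like this over all records after every network event is presumably where the roughly quadratic cost comes from.

```js
// Purely illustrative sketch, not Lighthouse's implementation: find the periods
// where at most `allowed` requests are in flight.
function findQuietPeriods(records, allowed) {
  // Each record contributes a +1 delta at its start; finished records contribute
  // a -1 delta at their end. Unfinished records keep the network busy forever.
  const deltas = [];
  for (const record of records) {
    deltas.push({time: record.startTime, delta: 1});
    if (record.finished) deltas.push({time: record.endTime, delta: -1});
  }
  deltas.sort((a, b) => a.time - b.time);

  const periods = [];
  let inflight = 0;
  let quietStart = 0;
  for (const {time, delta} of deltas) {
    const wasQuiet = inflight <= allowed;
    inflight += delta;
    const isQuiet = inflight <= allowed;
    if (wasQuiet && !isQuiet && time > quietStart) {
      periods.push({start: quietStart, end: time}); // a quiet period just ended
    } else if (!wasQuiet && isQuiet) {
      quietStart = time; // a quiet period just began
    }
  }
  // If the network is quiet after the last event, the final period is open-ended.
  if (inflight <= allowed) periods.push({start: quietStart, end: Infinity});
  return periods;
}
```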
And here's the worst example I could find (in a relatively short search) of network-idle periods from the network events on master, from this PR, vs the "true" network idle periods reconstructed after we have the full devtoolsLog and real network timestamps (I used a hacked-together trace to visualize, forgive me :). This is loading (raw trace if you want to do some zooming... will have to load manually, timeline-viewer CORS stuff isn't working right now).
I'm not sure I follow how it's more broken now than it ever was :) It seems like it's still working exactly as intended, just using the receipt of any response/headers as the "finished" state for QUIC requests, because at the time we had no other event that could possibly represent finished. Great news that the workaround isn't necessary anymore though, and we can move our idle detection slightly closer to reality!
ha, well I guess I didn't anticipate that possibility :) Good that we can remove it, then.
LGTM
```js
const periods = NetworkRecorder.findNetworkQuietPeriods(records, 0);
assert.deepStrictEqual(periods, [
  {start: 2000, end: Infinity},
]);
```
should we just change this to assert that there's no idle period here?
sounds good
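For reference, a sketch of what the suggested change might look like (it assumes the same records fixture as the excerpt above, with a request that never finishes):

```js
// Sketch of the suggested assertion: with the workaround gone, an unfinished
// QUIC request keeps the network busy, so there is no quiet period at all.
const periods = NetworkRecorder.findNetworkQuietPeriods(records, 0);
assert.deepStrictEqual(periods, []);
```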
#5256 added a workaround for #5254, where many QUIC requests never got a final LoadingFinished or LoadingFailed event, so we treated them as forever open, never reached network quiet, and always hit the LH load timeout. The workaround was to check if the request was over QUIC and if there was end timing information, calling that "finished".
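As I read the old code, the removed check was roughly the sketch below; the field names (responseHeaders, timing, endTime) and the alt-svc-based QUIC detection are paraphrased assumptions, not the verbatim implementation.

```js
// Paraphrased sketch of the removed heuristic: call a request "finished" if it
// looks like QUIC, its response headers have arrived, and it has an end time,
// even though no loadingFinished/loadingFailed event was received.
function isQUICAndFinished(record) {
  const isQUIC = (record.responseHeaders || []).some(
    header => header.name.toLowerCase() === 'alt-svc' && /quic/.test(header.value));
  const receivedHeaders = record.timing && record.timing.receiveHeadersEnd > 0;
  return Boolean(isQUIC && receivedHeaders && record.endTime > 0);
}
```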
We broke this workaround, I think, when we finally dropped our direct dependence on devtools-frontend and wrote our own NetworkRequest. NetworkRequest._recomputeTimesWithResourceTiming takes in some preliminary timing information that may or may not be updated later, but any QUIC request always appears finished once this happens, whether or not the timing ends up updated later.
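As I understand the description, the breakage is that once preliminary resource timing has been applied, the heuristic sketched above is always satisfied for a QUIC request; a contrived example (illustrative values only):

```js
// Illustrative only: after preliminary timing is filled in, the old heuristic
// reports "finished" even though no final protocol event has arrived.
const record = {
  responseHeaders: [{name: 'alt-svc', value: 'quic=":443"'}],
  timing: {receiveHeadersEnd: 123.4}, // populated by _recomputeTimesWithResourceTiming
  endTime: 5.678,                     // preliminary value, may still be updated later
  finished: false,                    // no loadingFinished/loadingFailed yet
};
isQUICAndFinished(record); // => true, so idle periods can start too early
```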
This causes networkidle and network-2-idle to sometimes fire too early, which means we may be cutting off some runs before TTI would actually be possible. I'm not sure how bad the impact is in practice, however, as:
- Network protocol events LH receives are often delayed well after they actually occurred
- the problematic case needs something like network-3-idle where one of the requests is QUIC and appears finished and we're not almost network-2-idle anyways

Originally I was going to try to update the QUIC heuristic, but after doing a bunch of tests, I don't think the original behavior is an issue anymore. All but a tiny handful of QUIC requests I've observed have a Network.loadingFinished event, and every time I've encountered a case of one without a finish event (developers.google.com sometimes hits this), extending networkQuietThresholdMs was enough to make the problem go away, so I think they're just legitimately slow loads.

So this PR just removes the workaround :)
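For completeness, a minimal sketch of the post-removal behavior (illustrative names, not the actual NetworkRecorder code): "finished" is driven purely by the protocol events, with no QUIC special case.

```js
// Illustrative sketch only; these are not the actual NetworkRecorder methods.
function onLoadingFinished(record, event) {
  record.finished = true;
  record.endTime = event.timestamp;
}

function onLoadingFailed(record, event) {
  record.finished = true;
  record.endTime = event.timestamp;
}

// A QUIC request without either event now simply stays in flight; per the testing
// described above, these cases are rare and appear to be legitimately slow loads.
```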