Contextual and GC friendly user timing #86
So, you're suggesting to enable creating temporary user-held performance timelines? Can't you get the same effect by registering PerformanceObservers for the user timings you care about, and clearing those user timing entries once you've logged them?
@yoavweiss That's a good point. However, using PerformanceObservers to clear those user timing entries doesn't resolve the entry-name collision problem when instrumenting async operations, mentioned in the OP. Those async operations may overlap, like:
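For instance, a sketch of the overlap problem (the function and entry names here are illustrative): two concurrent runs of the same instrumented operation share the global buffer, so each run must carry a unique prefix/id and must clean up after itself manually.

```javascript
// Two overlapping runs of the same instrumented operation: without a
// unique id in the entry names, the second mark would collide with the
// first and the measure could pick up the wrong start point.
async function instrumentedOperation(id) {
  performance.mark(`op:start:${id}`);
  await new Promise(resolve => setTimeout(resolve, 10));
  performance.measure(`op:duration:${id}`, `op:start:${id}`);
  // Manual cleanup; forgetting this leaks entries into the global buffer.
  performance.clearMarks(`op:start:${id}`);
  performance.clearMeasures(`op:duration:${id}`);
}

// These two runs overlap in time, so their entries coexist in the
// global buffer until each run clears its own.
instrumentedOperation(1);
instrumentedOperation(2);
```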
That's why I mentioned in the OP that user timing has to prefix entry names to compute correct measures in async operations. However, the current performance API for getting the entry list doesn't work well with this pattern: we either have to fetch the full list and filter it ourselves, or look up each prefixed name individually. So I'm suggesting creating a stand-alone user timing performance timeline for each async operation.
I wonder if adding an option that enables you to clear marks and measures by passing an array of entries would help. That way, you'd be able to have a PerfObserver for them, write down the results for the entries you get, and then clear them by adding them to an array and passing that array to a clearing method. WDYT?
I'm a bit confused about this statement. Would you elaborate on how the approach would avoid name prefixing for overlapping async operations?
We may also need to add a
@yoavweiss Thanks for pointing that out! I agree this approach is easier to adopt and fits the current shape of the design. Before we make changes on platforms like Node.js, is there any process that should happen on the spec first? I'd be happy to take it and push it forward.
It would be good to discuss this first at a WebPerfWG meeting, see that there are no objections from current implementers or users, and that there's appetite to make this change. The next step after that is to define those new API methods and their processing model.
This was discussed at a WG meeting a couple of months back. Summary:
Considering how the current API is shaped, creating new stand-alone performance timelines would be a fairly large change. Rather, adding a new `tag` field to the mark and measure options could solve the grouping and clearing problems:

```js
addEventListener('an_event_name', event => {
  const tag = getPerformanceTag(event); // <= any string
  performance.mark('mark a', { tag });
  asyncOperation()
    .then(() => {
      performance.measure('measure a', { start: 'mark a', tag });
      logEntries(performance.getEntriesByTag(tag));
      performance.clearMarksByTag(tag);
      performance.clearMeasuresByTag(tag);
    });
});
```

Observers could filter by tag as well:

```js
const observer = new PerformanceObserver(list => {
  const entries = list.getEntries();
  for (let idx = 0; idx < entries.length; idx++) {
    // process entries
  }
});
observer.observe({ entryTags: [ "my-tag" ] });
```

WDYT?
It's unclear to me what tags/namespaces would mean outside the context of user timing (and what observing by tag would do for other entry types).
I think tags/namespaces are uniquely important, so no concerns about this proposal to add explicit support for tagging. I especially like that this can be a shared convention in tooling, and that the extra filter in `observe()` keeps observers scoped to the entries they care about.

However, I wonder if the same effect could be achieved by filtering in userland. Perhaps there is a significant perf win to doing it with an explicit API, and tagging would seem like a top candidate... but maybe we can do a quick perf test? If you had 1000 marks, how long does it take to get them all and filter? What is a reasonable upper limit for user timing sizes?

Specifically, it seems the `clearMarks`/`clearMeasures` API right now is fairly limited, but perhaps we can add support for passing a reference to a `PerformanceEntry`? Then you can clear exactly the entries you have already processed.

Finally, how commonly is clearing marks and measures actually done? I know one of the use cases is to create custom timings in developer tooling without reporting to the performance timeline (#68)... but I think that issue needs a holistic solution. With User Timing L3, you really don't need to create a bunch of marks for custom measure times... the example above could have been:
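For instance, a sketch using User Timing Level 3 measure options, where `measure()` accepts explicit start/end timestamps so no marks need to be created or cleared (`asyncOperation` and `logEntries` are placeholders standing in for application code):

```javascript
// Placeholders for application code:
const asyncOperation = () => new Promise(resolve => setTimeout(resolve, 10));
const logEntries = entries => entries.forEach(e => console.log(e.name, e.duration));

function onEvent() {
  const start = performance.now();
  asyncOperation().then(() => {
    // L3 measure options: pass timestamps directly; no marks involved,
    // so there is nothing to clear afterwards.
    const entry = performance.measure('measure a', { start, end: performance.now() });
    logEntries([entry]); // measure() returns the entry directly
  });
}

onEvent();
```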
Discussed on the Apr 28 WebPerf WG call: https://w3c.github.io/web-performance/meetings/2022/2022-04-28/index.html Summary:
As the status quo, user marks and measures are globally buffered and need to be cleared manually with `performance.clearMarks` and `performance.clearMeasures`. In practice, this can be very awkward for applications or libraries that record repetitive series of operations: their marks and measures have to be prefixed to prevent collisions between possibly overlapping async operations, and they must clear the marks and measures under those prefixed names with `performance.clearMarks` and `performance.clearMeasures` after the operations complete. If the marks and measures are not cleared correctly, memory leaks are likely. This is not a very idiomatic JavaScript resource-management pattern. It would be better if those buffered entries could be garbage collected once their contextual async operation is no longer referenced.
This problem stands out significantly on Node.js. Node.js provides a standards-compliant user timing API so that applications and libraries can work seamlessly with Web platform code, yet those applications and libraries are expected to run the same code repetitively on the server side.
If user timing entries could be created under a contextual performance timeline, then that timeline could be referenced from the context of the async operation, like an event, a request, etc. Once the context of the operation is no longer referenced, all of its entries can be garbage collected cleanly. And entry names would no longer need prefixes, since they are not put into the global performance timeline and cannot collide with each other.
Conceptual usage:
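A minimal userland sketch of the concept (the `ContextualTimeline` class and its methods are hypothetical illustrations, not a proposed spec surface; a native version would integrate with the real performance timeline and PerformanceObserver):

```javascript
// Each async context owns its own timeline object, so entry names
// cannot collide across contexts, and all entries are garbage
// collected together with the context.
class ContextualTimeline {
  #marks = new Map();
  #entries = [];
  mark(name) { this.#marks.set(name, performance.now()); }
  measure(name, startMark) {
    const startTime = this.#marks.get(startMark);
    const entry = {
      name,
      entryType: 'measure',
      startTime,
      duration: performance.now() - startTime,
    };
    this.#entries.push(entry);
    return entry;
  }
  getEntries() { return this.#entries.slice(); }
}

async function handleRequest() {
  const timeline = new ContextualTimeline(); // scoped to this operation
  timeline.mark('start');
  await new Promise(resolve => setTimeout(resolve, 10));
  timeline.measure('duration', 'start');
  console.log(timeline.getEntries());
  // No clearMarks()/clearMeasures() calls: once `timeline` and the
  // request context are unreferenced, the entries are GC'd with them.
}

handleRequest();
```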