Utility for checking a metric passes the threshold? #2
This is an interesting idea! If it's possible to add this without increasing the file size too much, I agree it'd be a nice addition. (It may take a slight refactor, since so much of the reporting code is shared by all metrics.)
What would happen when a result passes or fails the threshold? Is it more about flagging good vs. bad results? Like an OKR where this quarter we went from 10% to 35% good results? It's definitely interesting to help developers get a quick idea of whether a result is good enough 😄
Thinking about this a bit more, one downside of reporting a pass/fail value on the metric is that, if Google decides to change the metric thresholds, that may require a breaking change in this library (may, not 100% sure). We also don't yet have hard, consistent thresholds for all of the metrics (e.g. the non-core Web Vitals like FCP and TTFB), and I'm not sure I'd want the API to differ between core and non-core vitals.
How about a threshold set as an optional value for each metric? In the README, you could recommend the optimal value for developers to benchmark against, while still giving them the opportunity to change it when they prefer. We could see this as a budget for those metrics, where developers can customize the value they're pushing for. A rough sketch of the idea follows.
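A minimal sketch of that "budget" idea, assuming the library's current `onLCP`-style registration functions; the `threshold` parameter and `passed` field here are hypothetical and not part of web-vitals:

```ts
// Hypothetical "budget" wrapper. onLCP and the Metric type are real
// web-vitals exports in current versions; the threshold option and
// passed field are invented for illustration only.
import {onLCP, type Metric} from 'web-vitals';

function onLCPWithBudget(
  callback: (metric: Metric & {passed: boolean}) => void,
  threshold = 2500 // web.dev's documented "good" LCP boundary, in ms
): void {
  onLCP((metric) => {
    callback({...metric, passed: metric.value <= threshold});
  });
}

// Usage with a stricter custom budget of 2000 ms:
onLCPWithBudget((metric) => {
  console.log(`LCP ${metric.value}ms is ${metric.passed ? 'within' : 'over'} budget`);
}, 2000);
```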
I've gone back and forth on whether we should support this, but ultimately I haven't been able to come up with an API that works well. Since the thresholds are well-documented publicly, I think it makes sense for developers who want to track threshold compliance to do so in their own code. Some of the reasons for my thinking:
- If Google ever changes the published thresholds, reflecting that could require a breaking change in this library.
- Not all metrics have published thresholds yet, so the API would differ between core and non-core vitals.
- Anything we add increases the library's file size.
Coming back to this issue after a few years, I think most of the reasons I decided not to implement this no longer apply, so now I think it's worth reconsidering:
- In the past two years, none of the published thresholds for any of the metrics have changed, and I don't anticipate them changing much (if at all) in the future. So this no longer seems like a reason not to have this feature.
- Since I wrote this, we now have published thresholds for all of the metrics in this library, so there would be no inconsistency between metric interfaces.
- After experimenting with a basic implementation, I found the size difference is very small (~100 bytes), so I think it's probably nothing to worry about.
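For reference, the check itself is tiny. A minimal, illustrative sketch of the kind of logic such an implementation might use (the `[good, poor]` pairs below are the thresholds published on web.dev; the helper is not the library's actual code):

```ts
// Illustrative only: maps a metric value to a rating using the
// thresholds published on web.dev ([good, poor] boundaries).
type Rating = 'good' | 'needs-improvement' | 'poor';

const THRESHOLDS: Record<string, [number, number]> = {
  CLS: [0.1, 0.25], // unitless layout-shift score
  FCP: [1800, 3000], // ms
  FID: [100, 300], // ms
  LCP: [2500, 4000], // ms
  TTFB: [800, 1800], // ms
};

function getRating(name: string, value: number): Rating {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```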
When I first started using the library, I was a little surprised to see that it didn't implement a canonical helper for determining whether a metric passed the Core Web Vitals thresholds.
While our documentation on web.dev captures what the thresholds are, I can imagine there's value in developers not needing to hardcode threshold values themselves (e.g. imagine if they got this subtly wrong).
I currently implement custom scoring logic in the WIP Web Vitals extension, but thought I'd throw this idea over the fence to see what you thought.
I'll defer to you on the API shape if you think this would be helpful, but you could loosely imagine...
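...something like the following, purely as a hypothetical sketch (the `rating` field is invented here to make the shape concrete; the library exposed no such field at the time):

```ts
// Hypothetical usage: assumes the library attached a rating field to
// each metric, which it did not at the time this issue was filed.
import {onCLS, onLCP} from 'web-vitals';

function report(metric: {name: string; value: number; rating?: string}) {
  if (metric.rating && metric.rating !== 'good') {
    // e.g. flag sessions that miss a threshold in your analytics
    navigator.sendBeacon('/analytics', JSON.stringify(metric));
  }
}

onCLS(report);
onLCP(report);
```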