Once #208 is complete, we will probably want to reorganize the benchmarks into tags so that they are more useful and meaningful. This issue can hopefully provide a place for discussion.
Most importantly: Are there other places where the tags are used such that changing them would be an issue? How can I identify these folks other than posting here?
For the most part, I think the existing tags are fine, though `apps` is perhaps a little vague and could be removed.
I would propose adding the following tags (each benchmark can have multiple tags):
Size:
workload: This would be for benchmarks that represent real-world workloads. These would roll up into "one big number" that we report in places like the CPython release notes. I'm not crazy about the name of this tag. Suggestions?
feature: The opposite of a workload benchmark, for benchmarks that exercise a very specific feature.
Domain:
web: Typical tasks used in server-side web development: for example, serializing/deserializing HTML, JSON, and XML, and l10n/i18n-related work.
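To illustrate how the proposal above could work in practice, here is a minimal sketch of selecting benchmarks by tag. The benchmark names and tag assignments are invented for illustration; they do not reflect the actual suite.

```python
# Hypothetical tag assignments; each benchmark can carry multiple tags.
BENCHMARK_TAGS = {
    "json_dumps": {"workload", "web"},
    "django_template": {"workload", "web"},
    "unpack_sequence": {"feature"},
}

def select(tag):
    """Return the names of benchmarks carrying the given tag, sorted."""
    return sorted(name for name, tags in BENCHMARK_TAGS.items() if tag in tags)

# A "one big number" report would aggregate everything tagged `workload`,
# while domain tags like `web` slice the same set along a different axis.
print(select("workload"))
print(select("web"))
print(select("feature"))
```

Because tags are independent sets, a benchmark can appear under both a size tag (`workload`/`feature`) and one or more domain tags without any special casing.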