Feature: Custom benchmark tags #267
I think that is a good direction for packaging of parameterised benchmarks. We currently do it by changing the benchmark description string, e.g.:

```swift
let benchmarks = {
    let parameterization = (0...5).map { 1 << $0 } // 1, 2, 4, ...
    parameterization.forEach { count in
        Benchmark("ParameterizedWith\(count)") { benchmark in
            for _ in 0 ..< count {
                blackHole(Int.random(in: benchmark.scaledIterations))
            }
        }
    }
}
```

but giving an exporter access to better-structured information about the parameterization in use, as you suggest, would be nicer and would avoid string parsing. Some considerations/questions would be:
Overall I think it would be a nice improvement!
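To make the idea concrete, here is a purely hypothetical sketch of what structured parameterization could look like. The `ParameterizedBenchmark` type and its `parameters` property are invented stand-ins for illustration, not part of the package-benchmark API:

```swift
import Foundation

// Hypothetical sketch: a minimal stand-in for a benchmark that carries its
// parameterization as structured data instead of encoding it in the name.
// `ParameterizedBenchmark` and `parameters` are invented for illustration.
struct ParameterizedBenchmark {
    let baseName: String
    let parameters: [String: String] // structured, exporter-friendly metadata

    // A display name can still be derived for human-readable output,
    // so existing name-based tooling keeps working.
    var displayName: String {
        let suffix = parameters
            .sorted { $0.key < $1.key }
            .map { "\($0.key)=\($0.value)" }
            .joined(separator: ",")
        return suffix.isEmpty ? baseName : "\(baseName)[\(suffix)]"
    }
}

let parameterization = (0...5).map { 1 << $0 } // 1, 2, 4, ...
let benchmarks = parameterization.map { count in
    ParameterizedBenchmark(baseName: "Parameterized",
                           parameters: ["count": String(count)])
}

for b in benchmarks {
    print(b.displayName) // e.g. "Parameterized[count=1]"
}
```

An exporter could then read `parameters` directly instead of parsing `"ParameterizedWith4"` back apart.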
This makes sense. What are the odds we could just give each Benchmark a default unique UUID? The user wouldn't have to provide one explicitly (and the benchmark might not even be parameterized). I assume there would just need to be a mapping back to the string-based name, which, combined with your suggestion below of modifying the textual output, would definitely be useful for disambiguation when outputting a description of the benchmark.
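As a tiny sketch of the default-UUID idea (the type and registry here are invented for illustration; the library does not currently expose this): each benchmark gets a stable identity automatically, with a lookup table mapping the id back to the human-readable name:

```swift
import Foundation

// Hypothetical sketch: give every benchmark a default unique identifier
// without requiring the user to supply one explicitly.
struct IdentifiedBenchmark {
    let id: UUID
    let name: String

    init(name: String, id: UUID = UUID()) { // default unique id
        self.id = id
        self.name = name
    }
}

// Mapping back from the opaque id to the string-based name,
// useful when disambiguating textual output.
var registry: [UUID: String] = [:]

let a = IdentifiedBenchmark(name: "ParameterizedWith4")
let b = IdentifiedBenchmark(name: "ParameterizedWith4") // same name, distinct id
registry[a.id] = a.name
registry[b.id] = b.name

print(a.id != b.id)          // distinct identities despite identical names
print(registry[a.id] ?? "?") // "ParameterizedWith4"
```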
Love this.
It's a good question. Crossed my mind while writing the example code above.
Thoughts?
Sure - this is excellent feedback, as one of my concerns was how this feature would fit within the ecosystems of the other exporters. For example, I want to do a bit of randomization in my benchmarking but want to record the results of that randomization along with the test. Specifically, the randomized values would be a double between 0.0 and 1.0 (a percentage). This wouldn't fit well as a tag because there are infinitely many values between 0.0 and 1.0, which is where a field is more useful. Do you have any recommendations or preferences about a generalization around the notion of "parameterizations" with finite ranges vs. infinite ranges?
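For reference, this distinction mirrors the InfluxDB line protocol, where tags are indexed, discrete string dimensions and fields carry measured (possibly continuous) values. A minimal hand-rolled encoder sketch, with `encodeLine` invented for illustration (a real exporter would also handle escaping, integer field suffixes, and timestamps):

```swift
import Foundation

// Minimal sketch of InfluxDB line protocol encoding, illustrating why a
// continuous value (a Double in 0.0...1.0) belongs in a field rather than a
// tag. `encodeLine` is a hand-rolled helper, not a library API.
func encodeLine(measurement: String,
                tags: [String: String],
                fields: [String: Double]) -> String {
    // Tags: discrete string dimensions, attached to the measurement name.
    let tagPart = tags
        .sorted { $0.key < $1.key }
        .map { ",\($0.key)=\($0.value)" }
        .joined()
    // Fields: arbitrary measured values, separated from tags by a space.
    let fieldPart = fields
        .sorted { $0.key < $1.key }
        .map { "\($0.key)=\($0.value)" }
        .joined(separator: ",")
    return "\(measurement)\(tagPart) \(fieldPart)"
}

// Discrete scenario info goes in tags; the randomized percentage, with its
// infinitely many possible values, goes in a field.
let line = encodeLine(measurement: "benchmark",
                      tags: ["name": "ParameterizedWith4", "scenario": "high_load"],
                      fields: ["randomization_pct": 0.37])
print(line) // benchmark,name=ParameterizedWith4,scenario=high_load randomization_pct=0.37
```

Keeping the percentage out of the tag set also avoids unbounded tag cardinality, which Influx handles poorly.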
I want to test the waters on whether the community would be receptive to a PR that adds the ability to apply custom tags to a benchmark.
This specifically targets a use case with Influx where I want to run the same benchmark multiple times under different constraints/scenarios.
For example:
This would allow me to get better dimensionality on a benchmark so that, when exported to Influx, it can be more easily queried.
This would require some amendments to the Influx format exporter to include these custom tags for the particular benchmark.
We (myself and/or @ORyanHampton) are happy to take this PR on - just don't want to spend time on it if it's not a direction that the community finds appropriate.