Basically, the proposal is to have a passive mode for the command line hsbencher tool. Instead of running the benchmarks it would just read its stdin for benchmark data and upload it. We already have similar things for uploading from files on disk (e.g. criterion reports)... this would do it live from stdin, something like:
And to do that we'd need to expand our vocabulary of "tags" to include some kind of start and end markers:
```
START_BENCHMARK
PROGNAME: foo
VARIANT: x
SELFTIMED: 9.99
....
END_BENCHMARK
```
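To make the proposed protocol concrete, here is a minimal sketch (in Python, purely for illustration — hsbencher itself is Haskell, and the function name `parse_tag_stream` is hypothetical) of how a passive mode might split a stdin stream on the start/end markers and collect the `KEY: value` tags in between:

```python
import sys


def parse_tag_stream(lines):
    """Parse a stream of "KEY: value" tag lines delimited by
    START_BENCHMARK / END_BENCHMARK markers, yielding one dict of
    tags per benchmark. Lines outside a marker pair are ignored,
    so ordinary program output can be interleaved with tags."""
    current = None
    for raw in lines:
        line = raw.strip()
        if line == "START_BENCHMARK":
            current = {}
        elif line == "END_BENCHMARK":
            if current is not None:
                yield current
            current = None
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()


if __name__ == "__main__":
    # Passive mode: read benchmark data live from stdin.
    for bench in parse_tag_stream(sys.stdin):
        print(bench)
```

Ignoring non-tag lines between the markers means a subprocess can keep printing its normal output and still report results, which is the point of doing this live from stdin rather than from files on disk.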
This would also solve our related problem of wanting to run a job via hsbencher, but report multiple, different benchmark results from a given subprocess execution. (Like an entire criterion suite.)
…ark results
The idea is that a single process may now issue START_BENCHMARK/END_BENCHMARK tags
and thus report results for distinct benchmarks. These "sub-benchmarks" inherit all
the settings from the benchmark config that launched the process, but they may
override those settings by outputting tags.
Multiple trials still work, and when each trial reports on multiple benchmarks, the
results are "zipped" together.
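The "zipping" described above can be pictured as follows — a small sketch (the helper name `zip_trials` is hypothetical, not hsbencher's API), assuming each trial yields its sub-benchmark results in the same order:

```python
def zip_trials(trials):
    """Group the i-th result of every trial together, so each
    sub-benchmark ends up with its own series of trial results.
    `trials` is a list of trials; each trial is the list of
    sub-benchmark results that one subprocess execution reported."""
    return [list(per_benchmark) for per_benchmark in zip(*trials)]


# Two trials, each reporting two sub-benchmarks (e.g. a criterion suite):
trial1 = [{"PROGNAME": "a", "SELFTIMED": "1.0"},
          {"PROGNAME": "b", "SELFTIMED": "2.0"}]
trial2 = [{"PROGNAME": "a", "SELFTIMED": "1.1"},
          {"PROGNAME": "b", "SELFTIMED": "2.1"}]

# Zipped: one entry per sub-benchmark, each holding both trials' results.
grouped = zip_trials([trial1, trial2])
```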
This is the major change that will enable #73.
Note, this commit creates a large amount of dead code. These disconnected functions
will be harvested in a dedicated future commit.