Support benchmark type fallback #3507
Conversation
PBENCH-1229 The intake code ensures that all datasets (even `server.archiveonly` datasets) will have a `server.benchmark` value representing the core benchmark used in the run. The visualization code uses this value to determine compatibility. However, on the staging server we have pre-existing datasets without the `server.benchmark` metadata, and the resulting "internal server error" failure mode is unpleasant. To be a bit more friendly, this adds a wrapper that first checks the `server.benchmark` metadata directly; failing that, it checks the normal Pbench Agent `dataset.metalog.pbench.script` metadata so that older `pbench-uperf` runs will be recognized; and failing that, it falls back to "unknown" so that we never have to deal with a `None` value.
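For context, here is a minimal sketch of that fallback chain, not the PR's exact code: the `Metadata.getvalue()` helper and `SERVER_BENCHMARK` constant appear in the diff hunks below, while the string-key form for the metalog lookup is an assumption.

```python
@staticmethod
def get_benchmark_name(dataset: Dataset) -> str:
    # Prefer the server.benchmark value recorded by the intake code.
    benchmark = Metadata.getvalue(dataset, Metadata.SERVER_BENCHMARK)
    if not benchmark:
        # Fall back to the Pbench Agent's recorded script name so that
        # older runs (e.g., pbench-uperf) are still recognized.
        benchmark = Metadata.getvalue(dataset, "dataset.metalog.pbench.script")
    # Never surface None: settle on "unknown" as a last resort.
    return benchmark if benchmark else "unknown"
```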
The code change looks good (except for two tiny nits which we needn't bother with); my beef is with the comment text, but we don't need to hold the merge for that, either.
```python
benchmark = Metadata.getvalue(dataset, Metadata.SERVER_BENCHMARK)

if not benchmark:
```
If you edit this for some other reason, I would remove this blank line (e.g., you didn't put one after the other call to `getvalue()`...).
```python
benchmark = DatasetsCompare.get_benchmark_name(dataset)
if not benchmark_choice:
    benchmark_choice = benchmark
elif benchmark != benchmark_choice:
    raise APIAbort(
```
It's a bit of a nicety, but I would do these comparisons in the opposite order:
```python
benchmark = DatasetsCompare.get_benchmark_name(dataset)
if benchmark != benchmark_choice:
    if not benchmark_choice:
        benchmark_choice = benchmark
    else:
        raise APIAbort(
```
That is, if `benchmark` is equal to `benchmark_choice` then we don't need to bother about anything. If `benchmark_choice` is `None`, that just makes it unlikely that `benchmark` will be equal to it, and we'll proceed to the next step, which is to decide whether to report an error or settle on the benchmark choice. So, on the first dataset we'll still do two comparisons, but on every subsequent dataset we'll do only one. (Granted, there aren't likely to be very many datasets, so we're not likely to save very many comparisons, but....)
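In its loop context, the suggested ordering would look something like this; the surrounding loop and the `APIAbort` arguments here are illustrative assumptions, not the PR's actual code:

```python
benchmark_choice = None
for dataset in datasets:
    benchmark = DatasetsCompare.get_benchmark_name(dataset)
    # Common case (matching types after the first dataset):
    # a single comparison, and we move on.
    if benchmark != benchmark_choice:
        if not benchmark_choice:
            # First dataset: adopt its benchmark as the baseline.
            benchmark_choice = benchmark
        else:
            # Datasets with different benchmark types can't be compared.
            raise APIAbort(
                HTTPStatus.BAD_REQUEST,
                f"Datasets must have the same benchmark type "
                f"({benchmark_choice} != {benchmark})",
            )
```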
This code is unchanged. It's even a bit inaccurate that GitHub is highlighting it as if it moved, because (in terms of my actual editing sequence) I didn't even move it: I moved the permission check that GitHub thinks stayed where it was. 😆