Create summary signature hash for benchmark tests without a suite-level value. #7889
@@ -176,9 +176,11 @@ def _load_perf_datum(job: Job, perf_datum: dict):
         suite_extra_options = _order_and_concat(suite['extraOptions'])
         summary_signature_hash = None

-        # if we have a summary value, create or get its signature by all its subtest
-        # properties.
-        if suite.get('value') is not None:
+        # If we have a summary value, create or get its signature by all its subtest
+        # properties. For benchmarks, we need to get a summary signature hash so that
+        # the series doesn't reset when we start moving away from the summary value,
+        # and start including that value as its own subtest with replicates.
+        if suite.get('value') is not None or suite.get("type") == "benchmark":
Reviewer: I am wondering if it's possible to have a summary signature hash for a benchmark test with a suite-level value. After removing the value from a benchmark, isn't the function …

Author: No, it won't create an additional one because the value is never added to the …
             # summary series
             summary_properties = {'suite': suite['name']}
             summary_properties.update(reference_data)
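To make the effect of the new guard concrete, here is a rough sketch of a benchmark suite that reports only subtest values. The payload is illustrative: the suite name and any fields not visible in the diff (such as the exact "replicates" key) are assumptions for this sketch, not the authoritative Perfherder schema.

```python
# Illustrative benchmark suite payload; names and fields are assumptions, not
# the authoritative Perfherder schema.
suite = {
    "name": "speedometer",
    "type": "benchmark",
    "extraOptions": ["e10s", "fission"],
    # No suite-level "value" here: the overall score only exists as a subtest.
    "subtests": [
        {"name": "score", "value": 142.3, "replicates": [140.1, 143.8, 142.9]},
    ],
}

# Old guard: False, so no summary signature hash would be created for this suite.
print(suite.get('value') is not None)
# New guard: True, so the summary signature is still created (or fetched) for benchmarks.
print(suite.get('value') is not None or suite.get("type") == "benchmark")
```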
@@ -214,20 +216,21 @@ def _load_perf_datum(job: Job, perf_datum: dict):
                 },
             )

-            (suite_datum, datum_created) = PerformanceDatum.objects.get_or_create(
-                repository=job.repository,
-                job=job,
-                push=job.push,
-                signature=signature,
-                push_timestamp=deduced_timestamp,
-                defaults={'value': suite['value'], 'application_version': application_version},
-            )
-            if suite_datum.should_mark_as_multi_commit(is_multi_commit, datum_created):
-                # keep a register with all multi commit perf data
-                MultiCommitDatum.objects.create(perf_datum=suite_datum)
+            if suite.get("value", None) is not None:
Reviewer: From this I understand that for benchmark tests without a suite-level value there won't be performance datums?

Author: That's correct, for benchmarks without a suite-level value there won't be a suite-level PerformanceDatum created for them. However, the subtests will still have a PerformanceDatum created for them since that happens on line 235, outside of this …
+                (suite_datum, datum_created) = PerformanceDatum.objects.get_or_create(
+                    repository=job.repository,
+                    job=job,
+                    push=job.push,
+                    signature=signature,
+                    push_timestamp=deduced_timestamp,
+                    defaults={'value': suite['value'], 'application_version': application_version},
+                )
+                if suite_datum.should_mark_as_multi_commit(is_multi_commit, datum_created):
+                    # keep a register with all multi commit perf data
+                    MultiCommitDatum.objects.create(perf_datum=suite_datum)

-            if _suite_should_alert_based_on(signature, job, datum_created):
-                generate_alerts.apply_async(args=[signature.id], queue='generate_perf_alerts')
+                if _suite_should_alert_based_on(signature, job, datum_created):
+                    generate_alerts.apply_async(args=[signature.id], queue='generate_perf_alerts')

         for subtest in suite['subtests']:
             subtest_properties = {'suite': suite['name'], 'test': subtest['name']}
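To restate what this second hunk does in isolation: after the change, a summary signature can exist without a summary datum. The sketch below is a simplified, self-contained rendering of that control flow; `ingest_suite` and its return values are hypothetical stand-ins for the ORM calls (`PerformanceSignature`, `PerformanceDatum`), not actual Treeherder helpers.

```python
def ingest_suite(suite: dict) -> list[str]:
    """Hypothetical, simplified stand-in for _load_perf_datum's suite handling."""
    created = []

    # Summary signature: created for suites with a value OR for benchmarks.
    if suite.get("value") is not None or suite.get("type") == "benchmark":
        created.append(f"summary signature for {suite['name']}")

        # Summary datum: only when there actually is a suite-level value.
        if suite.get("value", None) is not None:
            created.append(f"summary datum value={suite['value']}")

    # Subtest datums are created outside the guard above, so benchmarks without
    # a suite-level value still get per-subtest data points.
    for subtest in suite["subtests"]:
        created.append(f"subtest datum {subtest['name']}={subtest['value']}")

    return created


print(ingest_suite({
    "name": "speedometer",
    "type": "benchmark",
    "subtests": [{"name": "score", "value": 142.3}],
}))
# ['summary signature for speedometer', 'subtest datum score=142.3']
```

This mirrors the review exchange above: the summary series keeps its signature, but no suite-level datum is stored until a value is actually reported.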
Reviewer: I haven't had the chance to work in this area yet, so I have some questions. What do you mean by resetting? Having a different series with another signature, instead of belonging to the initial series with the initial signature?

Author: Yes, that's right. If the suite value gets removed, the benchmark data for the subtests would become a new series, which is not something we would want. It'll also prevent us from migrating the "overall" suite-level score into a subtest.
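As an illustration of the "reset" being avoided: Perfherder keys each series by a signature hash computed from the signature's properties, and, as the discussion above implies, subtest signatures incorporate the summary (parent) signature. The helper and property names below are hypothetical stand-ins for that hashing, not Treeherder's actual implementation.

```python
import hashlib


def signature_hash(properties: dict) -> str:
    # Hypothetical stand-in for the real signature-hash computation: hash a
    # canonical rendering of the signature properties.
    canonical = ",".join(f"{key}={properties[key]}" for key in sorted(properties))
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()


# If the summary signature stopped being created once a benchmark dropped its
# suite-level value, the subtest's parent property would change (or disappear),
# its hash would change, and its data would start accumulating under a new series.
with_parent = signature_hash({"suite": "speedometer", "test": "score", "parent": "abc123"})
without_parent = signature_hash({"suite": "speedometer", "test": "score"})
assert with_parent != without_parent  # different hash => Perfherder sees a new series
```

Keeping the summary signature hash stable by still computing it for benchmarks avoids that split, and leaves room to later migrate the suite-level score into its own subtest without breaking the history.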