Update Results Guidelines for unverified claims and misrepresentation of verified results #137

Open: wants to merge 12 commits into base: master
8 changes: 6 additions & 2 deletions MLPerf_Results_Messaging_Guidelines.adoc
@@ -12,6 +12,10 @@ If you used an MLPerf™ benchmark to obtain a result for your product or servic

Since your results have not gone through MLCommons review and, therefore, have not been verified by MLCommons, you must indicate your results are not verified by using the term “unverified” next to each result score and by using the following language when publishing or otherwise discussing your results: “_Result not verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.

If the components (e.g., hardware) that substantially determine the ML performance of an "unverified" score also have a verified official score (e.g., the same hardware with a different submission software stack), any public comparison must also state the official submission score of the closest available system.

Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance).
Contributor:
"reference code performance should not be equated with "out-of-box" performance" - we may want to remove this since it's possibly opinionated. I could see people making the argument that readable code is out of the box code.

Perhaps we could say, "It is not recommended to use the reference implementations to compare performance between systems." and not go into further details?

@DilipSequeira (Contributor), May 4, 2023:

Reference code is not typical of what you would deploy in production, and deliberately so, as all readability/perf tradeoffs are made in favor of readability - e.g. may be written for batch 1. If we want people to be able to use reference code to characterize stack performance "out of the box", and want such comparisons not to be misleading, then the reference code needs to be designed with that in mind, and I think that'd be a net loss. Submitters should not be reviewing reference code with the consideration "if someone runs this on my stack, is it remotely reflective of production performance?"

Also, I'm not sure of the value of recommending best practices in this document. Folks using MLPerf reference implementations to present comparisons either have a scientific intent, and will understand what they're measuring and carefully describe their methodology, or they have a marketing intent and will present their product in the best light allowed by MLCommons. One of the goals of the ToU is to minimize the level of misrepresentation by the latter group.

Contributor:

I would agree if you're recommending removing this completely:

"Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance)."

@DilipSequeira (Contributor), May 4, 2023:

Possibly a better framing of this would be something like

"Where the performance of a reference implementation is used, the main text or figure must state that the reference implementation is unoptimized and not reflective of performance in production."


== Use of MLPerf™ Benchmark for MLCommons Reviewed and Verified Results

If you used an MLPerf benchmark to obtain a result for your product or service, you submitted your result for MLCommons review, and your result was verified through such review, you may indicate your results are verified when publishing or otherwise discussing your results, by indicating your results are “verified” or “official” or by otherwise following the examples below for verified results. You may also choose to use this language: “_Result verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.
@@ -78,7 +82,7 @@ MLPerf results may not be compared against non-MLPerf results. For example, an M
== When comparing MLPerf results, you must identify any submission differences

When comparing results, the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario, or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators). When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified.

**Example for Non-MLCommons Reviewed Result**:

____
@@ -202,7 +206,7 @@ The MLPerf mark is owned by MLCommons, and only MLCommons and its authorized lic

Any MLCommons Member, Test Partner, or third party may report a violation of these Guidelines via email to the MLCommons Executive Director (“ED”) & Working Group (“WG”) chairs of the appropriate benchmark. Upon confirming the violation in their discretion, the ED & WG chairs will inform the potential violator and request remedial action. If the ED, WG chairs, and potential violator are unable to reach a mutually satisfactory conclusion, the issue can be raised in the WG to seek resolution via a WG vote.

A non-exhaustive list of possible remedial actions or penalties based on the degree of violation is noted below. Taking or not taking any or all actions on this list or otherwise does not constitute a waiver of any enforcement rights or other rights in the MLPerf benchmarks, software, and/or trademark.
Violating content must be taken down within the first 48 hours of the violation being reported. A non-exhaustive list of possible remedial actions or penalties based on the degree of violation is noted below. Taking or not taking any or all actions on this list or otherwise does not constitute a waiver of any enforcement rights or other rights in the MLPerf benchmarks, software, and/or trademark.
Contributor:

Can this be changed to within 2 business days? Also, the violation should be agreed upon by the relevant WG, since not all reports will necessarily be valid, right?


1. Requesting corrections to published materials in the form of marketing blog posts, journals, papers, and other media.
2. If the violation was at a public event such as a conference, the WG may direct the violator to issue a public statement to correct claims in ways that conform to these Guidelines.