Update Results Guidelines for unverified claims and misrepresentation of verified results #137

Open · wants to merge 12 commits into base: master

16 changes: 14 additions & 2 deletions in MLPerf_Results_Messaging_Guidelines.adoc
@@ -12,6 +12,10 @@ If you used an MLPerf™ benchmark to obtain a result for your product or service

Since your results have not gone through MLCommons review and, therefore, have not been verified by MLCommons, you must indicate your results are not verified by using the term “unverified” next to each result score and by using the following language when publishing or otherwise discussing your results: “_Result not verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.

If the components (e.g. hardware) that substantially determine the ML performance of an "unverified" score also have a verified official score (e.g. the same hardware with a different submission software stack), any public comparison must also state the official submission score of the closest available system.

Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance-optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance).
Contributor commented:
"reference code performance should not be equated with "out-of-box" performance" - we may want to remove this since it's possibly opinionated. I could see people making the argument that readable code is out of the box code.

Perhaps we could say, "It is not recommended to use the reference implementations to compare performance between systems." and not go into further detail?

DilipSequeira (Contributor) commented on May 4, 2023:

Reference code is not typical of what you would deploy in production, and deliberately so, as all readability/perf tradeoffs are made in favor of readability - e.g. it may be written for batch 1. If we want people to be able to use reference code to characterize stack performance "out of the box", and want such comparisons not to be misleading, then the reference code needs to be designed with that in mind, and I think that'd be a net loss. Submitters should not be reviewing reference code with the consideration "if someone runs this on my stack, is it remotely reflective of production performance?"

Also, I'm not sure of the value of recommending best practices in this document. Folks using MLPerf reference implementations to present comparisons either have a scientific intent, and will understand what they're measuring and carefully describe their methodology, or they have a marketing intent and will present their product in the best light allowed by MLCommons. One of the goals of the ToU is to minimize the level of misrepresentation by the latter group.

Contributor replied:

I would agree if you're recommending we remove this completely:

"Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance)."

DilipSequeira (Contributor) commented on May 4, 2023:

Possibly a better framing of this would be something like

"Where the performance of a reference implementation is used, the main text or figure must state that the reference implementation is unoptimized and not reflective of performance in production."


== Use of MLPerf™ Benchmark for MLCommons Reviewed and Verified Results

If you used an MLPerf benchmark to obtain a result for your product or service, you submitted your result for MLCommons review, and your result was verified through such review, you may indicate your results are verified when publishing or otherwise discussing your results, by indicating your results are “verified” or “official” or by otherwise following the examples below for verified results. You may also choose to use this language: “_Result verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.
@@ -77,8 +81,8 @@ MLPerf results may not be compared against non-MLPerf results. For example, an M

== When comparing MLPerf results, you must identify any submission differences

- When comparing results the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators). When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified.
+ When comparing results, the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario, chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators), or submitter name. When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified. When making comparisons, submissions must not be portrayed as representing the performance available from an Independent Hardware Vendor (IHV) unless the submission was by that IHV.

**Example for Non-MLCommons Reviewed Result**:

____
@@ -95,6 +99,14 @@ SmartAI Corp achieved a score of 0.6 on the MLPerf™ Image Classification bench
**Required Footnote**: “[1]Result verified by MLCommons Association. MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.”
____

or

____
SmartAI Corp achieved a score of 1.2 on the MLPerf™ NLP benchmark using a SmartCluster with 8 chips in the Available category of Closed Division, which is faster than the result of 7.2 achieved by LessSmartAI Corp with 16 chips from HardwareVendorX in the Available on-premise category of Closed Division.[1]

**Required Footnote**: “[1]Result verified by MLCommons Association. MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.”
____

Furthermore, a comparison of an unverified result with a verified result must include the following statement in a footnote: “_Unverified results have not been through an MLPerf™ review and may use measurement methodologies and/or workload implementations that are inconsistent with the MLPerf™ specification for verified results._”

**Example (applicable to Non-MLCommons Reviewed Result)**: