Update Results Guidelines for unverified claims and misrepresentation of verified results #137
base: master
@@ -12,6 +12,10 @@ If you used an MLPerf™ benchmark to obtain a result for your product or servic

 Since your results have not gone through MLCommons review and, therefore, have not been verified by MLCommons, you must indicate your results are not verified by using the term “unverified” next to each result score and by using the following language when publishing or otherwise discussing your results: “_Result not verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.

+If the components (e.g. HW) that substantially determine ML performance of an "unverified" score also have a verified official score (e.g. same HW with a different submission SW stack), it is required to state the official submission score of the closest available system in any public comparisons.
+
+Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance).
+
 == Use of MLPerf™ Benchmark for MLCommons Reviewed and Verified Results

 If you used an MLPerf benchmark to obtain a result for your product or service, you submitted your result for MLCommons review, and your result was verified through such review, you may indicate your results are verified when publishing or otherwise discussing your results, by indicating your results are “verified” or “official” or by otherwise following the examples below for verified results. You may also choose to use this language: “_Result verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.
@@ -77,8 +81,8 @@ MLPerf results may not be compared against non-MLPerf results. For example, an M

 == When comparing MLPerf results, you must identify any submission differences

-When comparing results the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators). When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified.
+When comparing results the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators) or submitter name. When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified. When making comparisons, submissions must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. a submission by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance.
Review comments on the comparison requirements:

- "scenario, chip count" (remove "or").
- "When making comparisons, submissions must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. a submission by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance" - this seems to be a pretty fundamental change. For example, this means I cannot submit an Apple phone benchmark and make claims about what performance Apple provides -- unless I work for Apple?
- Could we add qualifying language here - specifically "UNVERIFIED RESULTS"? "When making comparisons, UNVERIFIED RESULTS must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. an UNVERIFIED RESULT by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance"
- If you want to claim that's the performance your submission achieved when you ran your software stack on Apple hardware, that's fine, but if you used a third-party stack on a 2018 iPhone, it'd be misrepresentation to claim that's Apple's performance even if that's a verified result.
- I think this raises the bigger question, though: is MLPerf a hardware benchmark, or a hardware+software benchmark? For example, suppose my claim is: "We considered the following 3 software stacks for the hardware: A, B, and vendor. Our requirements prevent us from using the vendor stack and the B stack, so we had to use the A stack. Therefore, comparing on software stack A, this is the performance of hardware X." An example of this could be TF versus PT on TPUs. It wouldn't be fair for Google to claim TF is the only one that counts; not everyone wants to use TF.
- I think my question is: how can we prevent someone like Google from dictating which framework we need to use to be "official"? Because they clearly have a conflict there.
- I don't think it protects hardware more than software. The intent is to, e.g., protect Google from being compared against using a cherry-picked usage of TPU somewhere that they have no control over. The same applies to software: if you want to claim that your submission is faster than another submission using ONNX-RT on T4, fine, but you should not claim you've thereby beaten Microsoft or Nvidia. Because we want MLPerf to have third-party submissions on commodity stacks, but at the same time limit the potential for misleading claims based on those submissions. I appreciate that the example is specific to hardware, and perhaps we should change it to clarify that it also includes software.
- Also - if we're adding language not scoped to "unverified results", should we change the title? (The title implies this covers only unverified claims.)
- Yes, that makes sense - @nv-rborkar?
- Also - asked a question offline to clarify my understanding. I think I may be missing something in reading this.
- And thanks @DilipSequeira and @nv-rborkar for keeping up with all these questions. You both have been very patient in working through 6 months of discussion :)

 **Example for Non-MLCommons Reviewed Result**:

 ____
@@ -95,6 +99,14 @@ SmartAI Corp achieved a score of 0.6 on the MLPerf™ Image Classification bench
 **Required Footnote**: “[1]Result verified by MLCommons Association. MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.”
 ____

+or
+
+____
+SmartAI Corp achieved a score of 1.2 on the MLPerf™ NLP benchmark using a SmartCluster with 8 chips in the Available category of Closed Division which is faster than the result of 7.2 achieved by LessSmartAI Corp with 16 chips from HardwareVendorX in the Available on-premise category of Closed Division.[1]
+
+**Required Footnote**: “[1]Result verified by MLCommons Association. MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.”
+____
+
 Furthermore, a comparison of an unverified result with a verified result must include the following statement in a footnote: “_Unverified results have not been through an MLPerf™ review and may use measurement methodologies and/or workload implementations that are inconsistent with the MLPerf™ specification for verified results._”

 **Example (applicable to Non-MLCommons Reviewed Result)**:
Review comments on the reference code note:

- "reference code performance should not be equated with "out-of-box" performance" - we may want to remove this since it's possibly opinionated. I could see people making the argument that readable code is out-of-the-box code. Perhaps we could say, "It is not recommended to use the reference implementations to compare performance between systems," and not go into further detail?
- Reference code is not typical of what you would deploy in production, and deliberately so, as all readability/perf tradeoffs are made in favor of readability - e.g. it may be written for batch 1. If we want people to be able to use reference code to characterize stack performance "out of the box", and want such comparisons not to be misleading, then the reference code needs to be designed with that in mind, and I think that'd be a net loss. Submitters should not be reviewing reference code with the consideration "if someone runs this on my stack, is it remotely reflective of production performance?" Also, I'm not sure of the value of recommending best practices in this document. Folks using MLPerf reference implementations to present comparisons either have a scientific intent, and will understand what they're measuring and carefully describe their methodology, or they have a marketing intent and will present their product in the best light allowed by MLCommons. One of the goals of the ToU is to minimize the level of misrepresentation by the latter group.
- I would agree if you're recommending we remove this completely: "Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance)."
- Possibly a better framing of this would be something like: "Where the performance of a reference implementation is used, the main text or figure must state that the reference implementation is unoptimized and not reflective of performance in production."