Update Results Guidelines for unverified claims and misrepresentation of verified results #137
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
WG: Some concerns about whether 24 hours is achievable for all cases. Some questions about business days versus calendar days, and whose business days apply for national holidays. Some ambiguity about the definition of 'competitor'. Some questions about applying the rule to submissions with performance-critical components from multiple vendors (e.g. NVIDIA accelerator with AMD CPU - which repository to use?). How to handle disputes about 'good faith'?
Flagging that I want a chance to speak more to this PR before we move forward with deciding whether to merge.
I think making sure people make good comparisons is important. I'd like to better understand and discuss the implications this has for "out of the box" and other comparisons. I could see some interesting comparisons that would be excluded by this as written.
I also think we could explore additional avenues to address this concern - for example better documenting the references so people don't accidentally compare using them without understanding the implications of doing that.
Thanks @tjablin, @bitfort, @TheKanter for the questions & feedback.
@TheKanter what would you propose as a sensible but practical timeline for taking down disputed content? Note that reposting with a fix can take longer, but we should try to have a timeline for taking down violating content, as media hit cycles are short and matter only in the first few days. Note that in the WG we will also be asking participants to get this reviewed by their respective marketing representatives.
Thanks for the clarification - my question here: it is my understanding that under MLPerf rules, it is legal for anyone to submit on anyone else's hardware and software, right? For example, Facebook could submit a TensorFlow + TPU submission. Google could submit a PyTorch + Intel submission. Facebook wouldn't be required to use Google's preferred or previously submitted models; Google wouldn't be required to use any preferred software or models from Intel. If I can submit a new score which isn't based on a previous "official submission" - how come my unverified scores need to be based on a previous "official submission"?
For example, to be more concrete, suppose I thought that Company X's optimizations didn't reflect what users do in practice. So I wanted to publish a blog post saying "Company X's MLPerf results if they wrote the code the way real researchers write the code", where I give an unverified MLPerf result using a different implementation of ResNet-50 than the one Company X has previously submitted. This seems like a fair point to make - because I'm saying "No one in practice writes their models like Company X writes their models for MLPerf - here's what it would really look like." While Company X may not like what I have to say, it isn't clear that they should be able to order me to take it down. My reading of this PR would allow them to take it down because I didn't use their previous official submission. As long as I am following the rules of MLPerf, I think I should be allowed to share a different opinion on what is important to users. If I think a different implementation is a better reflection of what users value - what really matters is whether I'm following the rules, not whether people like what I have to say. If someone can show that my unverified results are not following MLPerf rules - perhaps then a takedown may be considered in that case. But the requirement to use an "official submission" seems to stifle legitimate disagreement. I think we may want to find a way to adjust the language to separate "alternative perspectives & fair criticism" from "misleading and incorrect." Also - I may be misunderstanding here - please do correct me.
Good questions w/ examples. Yes there is a misunderstanding :)
The updated language doesn't require the unverified score to be based on a previous official submission. It requires you to "also" show the official submission score along with your unverified score in comparisons (e.g. blogs, charts, etc.). This could also be a good incentive for SW innovators to beat SOTA MLPerf performance.
In this example, one wouldn't be ordered to take the opinion down as long as the comparison with the unverified score transparently mentions the official score:
For example, company X has an official score on HW platform A with SW stack 1.
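To illustrate (continuing this example with hypothetical names): if company Y then publishes an unverified score on the same HW platform A using its own SW stack 2, the blog post or chart must also show company X's official MLPerf score for platform A with SW stack 1, label company Y's number as "unverified", and carry the footnote "Result not verified by MLCommons Association."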
@nv-rborkar I'd like to discuss how other benchmarks handle this in private.
@bitfort , @TheKanter I have modified the PR
Also would you like to join the Training WG meeting on 2/16 to discuss this more? |
03/09/2023 Training WG update:
RE: Timeline for complaints and violations. Generally the flow should look like:
Current guidance is that MLC can send out a letter requesting a fix within 96h for step 5. We should figure out a schedule for steps 3 & 4 that is realistic given MLC resources.
Regarding step 4, the feedback I got from internal discussions is to set the timeline in working/business days rather than hours.
I'm unit agnostic here :) 4 working days is fine.
David
Removed violation addressing deadline from this PR as it would be more productive to discuss it in a separate PR. Added language to disallow misrepresentation of performance from 3rd party submitters.
Removed the deadline for handling violations from this PR as it would be more productive to discuss it in a separate PR. This PR now addresses only misrepresentation of reference performance or 3rd-party performance. Please review.
@@ -202,7 +206,7 @@ The MLPerf mark is owned by MLCommons, and only MLCommons and its authorized lic

Any MLCommons Member, Test Partner, or third party may report a violation of these Guidelines via email to the MLCommons Executive Director (“ED”) & Working Group (“WG”) chairs of the appropriate benchmark. Upon confirming the violation in their discretion, ED & WG chairs would inform the potential violator and request remedial action. If the ED, WG chairs, and potential violator are unable to reach a mutually satisfactory conclusion, the issue can be raised in WG to seek resolution via WG vote.

- A non-exhaustive list of possible remedial actions or penalties based on the degree of violation is noted below. Taking or not taking any or all actions on this list or otherwise does not constitute a waiver of any enforcement rights or other rights in the MLPerf benchmarks, software, and/or trademark.
+ Violating content must be taken down within first 48 hours of violation being reported. A non-exhaustive list of possible remedial actions or penalties based on the degree of violation is noted below. Taking or not taking any or all actions on this list or otherwise does not constitute a waiver of any enforcement rights or other rights in the MLPerf benchmarks, software, and/or trademark.
Can this be changed to within 2 business days? Also, the violation should be agreed upon by the relevant WG, since not all reports will necessarily be valid, right?
Hrmm, I see a change that singles out IHVs; I don't think that's a good idea. This thing has been lurking around for a long time. Is there a way we can find some non-contentious subsets and commit those? E.g., the 4-day notice should be updated. @nv-rborkar
If you click on the "Files" diff at the top, it will show the real diff, where the notice-period clause has been removed from this PR to simplify the issues being discussed. Thanks for the feedback @TheKanter about singling out IHVs - I've modified the language.
@nv-rborkar let's discuss some of the ideas here offline. I'm not sure I understand the goals.
LGTM from the Inference Working Group.
We have separated the timeline change out into a PR that has been accepted; this PR now focuses solely on allowed comparisons.
@@ -12,6 +12,10 @@ If you used an MLPerf™ benchmark to obtain a result for your product or servic

Since your results have not gone through MLCommons review and, therefore, have not been verified by MLCommons, you must indicate your results are not verified by using the term “unverified” next to each result score and by using the following language when publishing or otherwise discussing your results: “_Result not verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.

+ If the components (e.g. HW) that substantially determine ML performance of an "unverified" score also have a verified official score (e.g. same HW with a different submission SW stack), it is required to state the official submission score of the closest available system in any public comparisons.
+
+ Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance).
"reference code performance should not be equated with "out-of-box" performance" - we may want to remove this since it's possibly opinionated. I could see people making the argument that readable code is out of the box code.
Perhaps we could say, "It is not recommended to use the reference implementations to compare performance between systems." and not go into further details?
Reference code is not typical of what you would deploy in production, and deliberately so, as all readability/perf tradeoffs are made in favor of readability - e.g. may be written for batch 1. If we want people to be able to use reference code to characterize stack performance "out of the box", and want such comparisons not to be misleading, then the reference code needs to be designed with that in mind, and I think that'd be a net loss. Submitters should not be reviewing reference code with the consideration "if someone runs this on my stack, is it remotely reflective of production performance?"
Also, I'm not sure of the value of recommending best practices in this document. Folks using MLPerf reference implementations to present comparisons either have a scientific intent, and will understand what they're measuring and carefully describe their methodology, or they have a marketing intent and will present their product in the best light allowed by MLCommons. One of the goals of the ToU is to minimize the level of misrepresentation by the latter group.
I would agree if you're recommending to remove this completely:
"Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance)."
Possibly a better framing of this would be something like
"Where the performance of a reference implementation is used, the main text or figure must state that the reference implementation is unoptimized and not reflective of performance in production."
@@ -77,8 +81,8 @@ MLPerf results may not be compared against non-MLPerf results. For example, an M

== When comparing MLPerf results, you must identify any submission differences

- When comparing results the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators). When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified.
+ When comparing results the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators) or submitter name. When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified. When making comparisons, submissions must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. a submission by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance.
"scenario, chip count" (remove "or").
"When making comparisons, submissions must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. a submission by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance"
This seems to be a pretty fundamental change. For example, this means I cannot submit an Apple phone benchmark and make claims around what performance Apple provides -- unless I work for Apple?
Could we add qualifying language here - specifically "UNVERIFIED RESULTS"?
"When making comparisons, UNVERIFIED RESULTS must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. an UNVERIFIED RESULT by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance"
If you want to claim that's the performance your submission achieved when you ran your software stack on Apple hardware, that's fine, but if you used a third-party stack on a 2018 iPhone, it'd be misrepresentation to claim that's Apple's performance even if that's a verified result.
I think this asks the bigger question though: is MLPerf a Hardware Benchmark, or a Hardware+Software Benchmark.
For example, suppose my claim is: "We considered the following 3 software stacks for the hardware, A, B, and vendor. Our requirements prevent us from using the Vendor stack and B stack, so we had to use A stack. Therefore, comparing on software stack A, this is the performance of the Hardware X"
An example of this could be TF versus PT on TPUs. It wouldn't be fair for Google to claim TF is the only one that counts; not everyone wants to use TF.
I think my question is: how can we prevent someone like Google from dictating which framework we need to use to be "official"? Because they clearly have a conflict there.
I don't think it protects hardware more than software. The intent is to e.g. protect Google from being compared against using a cherry-picked usage of TPU somewhere that they have no control over. The same applies to software: if you want to claim that your submission is faster than another submission using ONNX-RT on T4, fine, but you should not claim you've thereby beaten Microsoft or Nvidia.
Because we want MLPerf to have third party submissions on commodity stacks, but at the same time limit the potential for misleading claims based on those submissions.
I appreciate that the example is specific to hardware, and perhaps we should change it to clarify that it also includes software.
Also - if we're adding language not scoped to "unverified results", should we change the title? (the title implies this is only unverified claims).
Yes, that makes sense - @nv-rborkar ?
Also - asked a question offline to clarify my understanding. I think I may be missing something in reading this.
And thanks @DilipSequeira and @nv-rborkar for working through all these questions. You both have been very patient over 6 months of discussion :)
This PR mentions the reference code as though it would be the only way to generate unofficial results, but there are other ways to run: CK and MLCube. In fact, the goal of those tools is to make MLPerf easy to run. This PR puts an extra burden on those efforts. Is this even needed? We already have a disclaimer that those results have not been verified and thus are unofficial. Plus, there is no definition of what a "verified official score" actually is - presumably a submission by a chip manufacturer? This works for Nvidia and Intel, but what about results where a 3rd party submission is the only one - does that count as an official score?
Jumping in to clarify - I believe official and verified are synonymous. So an official score is one that has been obtained through submission.
It is encouraged to use published official MLPerf scores for comparisons. If deriving unofficial results for competitor products or services, one should use that submitter's code repository, if it exists, instead of reference code in the spirit of making a good faith best effort.
Also added a timeline (24 hours) for taking down content that violates the Results Guidelines.