The score card should indicate when checks don't support a particular language/ecosystem #74
Comments
Thanks for the report! I think this is essentially what I was trying to capture with the "Confidence" field in the result: if we can't actually tell, or if a check doesn't apply, we return a low confidence score. That should be used to indicate the result should not be relied on. Does that work, or do you think we need more info in the result?
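For illustration, a minimal sketch of that idea in Go. The type and field names here are hypothetical, not necessarily the actual scorecard API: a check that can't tell whether it applies returns a low confidence value rather than a hard failure.

```go
// Hypothetical sketch; not the real scorecard checker types.
package checker

// CheckResult is an illustrative shape for a single check's outcome.
type CheckResult struct {
	Name       string
	Pass       bool
	Confidence int // 0-10; low values mean the result should not be relied on
}

// FuzzingCheck sketches how a check could signal "can't tell / doesn't apply"
// by returning zero confidence instead of a hard failure.
func FuzzingCheck(language string) CheckResult {
	supported := map[string]bool{"c": true, "c++": true, "go": true, "rust": true, "python": true}
	if !supported[language] {
		// The check does not support this language, so the zero confidence
		// tells consumers that the Pass value carries no signal.
		return CheckResult{Name: "Fuzzing", Pass: false, Confidence: 0}
	}
	// ... real detection logic (e.g. looking for a fuzzing integration) would go here ...
	return CheckResult{Name: "Fuzzing", Pass: true, Confidence: 10}
}
```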
Are you only concerned about fuzzing, or can other checks apply to only specific languages? Note that fuzzing support is expanding to more languages, e.g. C/C++, Rust, Swift, Golang, and Python (coming soonish with the https://github.com/google/atheris release).
Not fuzzing specifically. It was just a check I was using as an example. The confidence metric probably does cover this. I just think it should be obvious whether a project got a zero for something because the checks don't support the language, versus the checks do support the language and the project failed them.
Would this issue align with the Policy per Ecosystem work? If not, this feature may not align with the current project focus. Please advise to the contrary within the next 7 days; otherwise this issue will be closed.
Seems like a good fit to me.
There are some checks, like fuzzing, that are very specific to particular languages.
I think any score card should make clear when a particular check doesn't natively support a language or ecosystem.
As both a consumer and a maintainer, it would be good to know when a project didn't really score zero on something: the check simply doesn't support that language/ecosystem, as opposed to the maintainer having gone to the effort of flagging that they do something the automated checks can't pick up.
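A minimal sketch of what I'm asking for, with hypothetical names (not the scorecard API): the result carries an explicit not-applicable flag and a reason, so a consumer can tell "check unsupported for this ecosystem" apart from "checked and failed".

```go
// Hypothetical sketch of a result that distinguishes "not applicable"
// from a genuine zero; names are illustrative only.
package main

import "fmt"

type ScoredCheck struct {
	Name          string
	Score         int    // 0-10
	NotApplicable bool   // true when the check doesn't support the language/ecosystem
	Reason        string // human-readable explanation
}

func render(c ScoredCheck) string {
	if c.NotApplicable {
		return fmt.Sprintf("%s: N/A (%s)", c.Name, c.Reason)
	}
	return fmt.Sprintf("%s: %d/10", c.Name, c.Score)
}

func main() {
	fmt.Println(render(ScoredCheck{Name: "Fuzzing", NotApplicable: true, Reason: "check does not support this language"}))
	fmt.Println(render(ScoredCheck{Name: "Signed-Releases", Score: 0, Reason: "no signed release assets found"}))
}
```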