
The score card should indicate when checks don't support a particular language/ecosystem #74

Open
coderpatros opened this issue Nov 17, 2020 · 5 comments
Labels
kind/bug (Something isn't working), priority/must-do (Upcoming release)

Comments

@coderpatros

The scorecard can be populated for any open source project without any work or interaction from maintainers.

Maintainers must be provided with a mechanism to correct any automated scorecard findings they feel were made in error, provide "hints" for anything we can't detect automatically, and even dispute the applicability of a given scorecard finding for that repository.

There are some checks, like fuzzing, that are very specific to particular languages.

I think any score card should make clear when a particular check doesn't natively support a language or ecosystem.

As both a consumer and a maintainer, it would be good to know when a project didn't really score zero on something, i.e. the check simply doesn't support that language/ecosystem, versus the maintainer having gone to the effort of somehow flagging that they do something the automated checks can't pick up.

@dlorenc
Contributor

dlorenc commented Nov 17, 2020

Thanks for the report! I think this is essentially what I was trying to capture with the "Confidence" field in the result - if we can't actually tell or if a check doesn't apply we return a low confidence score. That should be used to indicate the result should not be relied on.
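For illustration only, here's a minimal sketch (in Go, with hypothetical field names that are not taken from the actual Scorecard codebase) of the idea behind a per-check result that carries a confidence value:

```go
package main

import "fmt"

// CheckResult is a hypothetical, simplified shape for a single check's result.
// A low Confidence value signals that the result should not be relied on,
// e.g. when the check couldn't tell or doesn't apply to the repository.
type CheckResult struct {
	Name       string
	Pass       bool
	Confidence int // e.g. 0-10; low values mean "don't rely on this result"
}

func main() {
	// A fuzzing check that doesn't support the repo's language might
	// return a failing result with zero confidence.
	r := CheckResult{Name: "Fuzzing", Pass: false, Confidence: 0}
	if r.Confidence < 5 {
		fmt.Printf("%s: inconclusive (confidence %d)\n", r.Name, r.Confidence)
	}
}
```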

Does that work, or do you think we need more info in the result?

@inferno-chromium
Contributor

Are you only concerned about fuzzing, or can other checks apply to only specific languages?

Note that fuzzing support is expanding to more languages, e.g. C/C++, Rust, Swift, Golang, and Python (coming soonish with the https://github.com/google/atheris release).

@coderpatros
Author

Not fuzzing specifically; it was just the check I used as an example.

The confidence metric probably does cover this. I just think it should be obvious whether a project got a zero on something because the checks don't support the language, versus the checks supporting the language and the project failing them.
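For illustration, here is a sketch of one way to make that distinction explicit, using a three-state outcome instead of a boolean plus confidence; the names (Outcome, NotApplicable) are hypothetical and not part of Scorecard's actual API:

```go
package main

import "fmt"

// Outcome is a hypothetical three-state check result that separates
// "ran and failed" from "can't assess this language/ecosystem at all".
type Outcome int

const (
	Passed Outcome = iota
	Failed
	NotApplicable // the check doesn't support the repo's language/ecosystem
)

func describe(check string, o Outcome) string {
	switch o {
	case Passed:
		return check + ": passed"
	case Failed:
		return check + ": failed"
	default:
		return check + ": not applicable for this language/ecosystem"
	}
}

func main() {
	// A zero from Failed means the project was assessed and fell short;
	// NotApplicable means the check never really ran.
	fmt.Println(describe("Fuzzing", NotApplicable))
}
```

With a shape like this, a reported zero can carry a reason, so consumers can tell "assessed and failed" apart from "not assessed at all".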

@afmarcum
Contributor

Would this issue align with the Policy per Ecosystem work?

If not, this feature may not align with the current project focus. Please advise to the contrary within the next 7 days so we can determine whether this issue will be closed.

@spencerschrock
Member

> Would this issue align with the Policy per Ecosystem work?

Seems like a good fit to me

@afmarcum afmarcum added this to the Policy per Ecosystem milestone Sep 6, 2023
@afmarcum afmarcum moved this to Backlog - Bugs in Scorecard - NEW Mar 5, 2024