From 7ccf0e094b34b3a7563acb8632f1357a59140e75 Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 2 Jan 2024 13:37:18 +0000 Subject: [PATCH 01/12] :book: Add documentation about probes and contributing Signed-off-by: Adam Korczynski --- CONTRIBUTING.md | 31 +++++++++++++++++++++++++++++++ probes/README.md | 48 ++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 79 insertions(+) create mode 100644 probes/README.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index eeec82bd6bc..69a8c5c3b65 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -16,7 +16,10 @@ project. This document describes the contribution guidelines for the project. * [How to build scorecard locally](#how-to-build-scorecard-locally) * [PR Process](#pr-process) * [What to do before submitting a pull request](#what-to-do-before-submitting-a-pull-request) +* [Changing Score Results](#changing-score-results) +* [Linting](#linting) * [Permission for GitHub personal access tokens](#permission-for-github-personal-access-tokens) +* [Adding New Probes](#adding-new-probes) * [Where the CI Tests are configured](#where-the-ci-tests-are-configured) * [dailyscore-cronjob](#dailyscore-cronjob) * [Deploying the cron job](#deploying-the-cron-job) @@ -126,6 +129,9 @@ assumed to match the PR. For instance, if you have a bugfix in with a breaking change, it's generally encouraged to submit the bugfix separately, but if you must put them in one PR, you should mark the whole PR as breaking. +When a maintainer reviews your code, it is generally preferred to address each individual +review comment with small follow-up fixes rather than rebasing, so the maintainer can assess each fix separately. + ## What to do before submitting a pull request Following the targets that can be used to test your changes locally. @@ -133,12 +139,33 @@ Following the targets that can be used to test your changes locally. | Command | Description | Is called in the CI?
| | -------- | -------------------------------------------------- | -------------------- | | make all | Runs go test,golangci lint checks, fmt, go mod tidy| yes | +| make unit-test | Runs unit tests only. Good to run often when developing locally. `make all` will also run this. | yes | +| make check-linter | Checks linter issues only. Good to run often when developing locally. `make all` will also run this. | yes | | make e2e-pat | Runs e2e tests | yes | Make sure to signoff your commits before submitting a pull request. https://docs.pi-hole.net/guides/github/how-to-signoff/ +## Changing Score Results + +As a general rule of thumb, pull requests that change Scorecard score results will need a good reason to do so to get merged. + +## Linting + +Most linter issues can be fixed with `golangcgi-lint` by following these steps: + +``` +cd /tmp +git clone https://github.com/my-fork-of/scorecard +cd /tmp +git clone --depth=1 https://github.com/golangci/golangci-lint --branch=v1.55.2 # Latest release +cd golangcgi-lint/cmd/golangcgi-lint +go build . +cd /tmp/scorecard +/tmp/golangcgi-lint/cmd/golangcgi-lint/golangcgi-lint run --fix +``` + ## Permission for GitHub personal access tokens The personal access token need the following scopes: @@ -168,6 +195,10 @@ Commit the changes, and submit a PR and scorecard would start scanning in subseq See [checks/write.md](checks/write.md). When you add new checks, you need to also update the docs. +## Adding New Probes + +See [probes/README.md](probes/README.md) for information about the probes. + ## Updating Docs A summary for each check needs to be included in the `README.md`. diff --git a/probes/README.md b/probes/README.md new file mode 100644 index 00000000000..9c08170618c --- /dev/null +++ b/probes/README.md @@ -0,0 +1,48 @@ +# Scorecard probes + +This subdirectory contains all the Scorecard probes. + +A probe is an assessment of a focused, specific problem typically isolated to a particular ecosystem. 
For example, Scorecard's fuzzing check consists of many different probes that assess particular ecosystems or aspects of fuzzing. + +Each probe has its own directory in `scorecard/probes`. The probes follow a camelcase naming convention that describes the exact problem a particular probe assesses. + +Probes can return multiple or a single `finding.Outcome`. Probes should be designed in such a way that a `finding.OutcomePositive` reflects a positive result, and `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; for example, `finding.OutcomeNotAvailable` is often used for scenarios where Scorecard cannot assess a project with a given probe. In addition, probes should also be named in such a way that they answer "yes" or "no", where "yes" answers positively to the problem and "no" answers negatively. For example, probes that check for SAST tools in the CI are called `toolXXXInstalled` so that `finding.OutcomePositive` reflects that it is positive to use the given tool, and that "yes" reflects what Scorecard considers the positive outcome. For some probes, this can be a bit trickier to do; the `notArchived` probe checks whether a project is archived; however, Scorecard considers archived projects to be negative, so the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines. Note that probes do not do any formal evaluation such as scoring; this is left to the evaluation part once the outcomes have been produced by the probes. + +A probe consists of three files: + +- `def.yml`: The documentation of the probe. +- `impl.go`: The actual implementation of the probe. +- `impl_test.go`: The probe's tests. + +## Reusing code in probes + +When multiple probes use the same code, the reused code can be placed in `scorecard/probes/internal/utils`. + +## How do I know which probes to add?
+ +In general, browsing through the Scorecard GitHub issues is the best way to find new probes to add. Requests for support for new tools, fuzzing engines or other heuristics can often be converted into specific probes. + +## Probe definition formatting + +Probe definitions can display links following standard markdown format. + +Probe definitions can display dynamic content. This requires modifications in `def.yml` and `impl.go` and in the evaluation steps. + +The following snippet in `def.yml` will display dynamic data provided by `impl.go`: + +```md +${{ metadata.dataToDisplay }} +``` + +And then in `impl.go` add the following metadata: + +```golang +f, err := finding.NewWith(fs, Probe, + "Message", nil, + finding.OutcomePositive) +f = f.WithRemediationMetadata(map[string]string{ + "dataToDisplay": "this is the text we will display", +}) +``` + +To display the content to the user, Scorecard needs to print out the findings metadata somewhere in the evaluation part. This can be done with logging. \ No newline at end of file From 1e42ccc438d6e7e24d7c326f9c9a2f2a4fd6b5e7 Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 18:21:18 +0000 Subject: [PATCH 02/12] change 'subdirectory' to 'directory' Signed-off-by: Adam Korczynski --- probes/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/probes/README.md b/probes/README.md index 9c08170618c..0b0aa31865b 100644 --- a/probes/README.md +++ b/probes/README.md @@ -1,6 +1,6 @@ # Scorecard probes -This subdirectory contains all the Scorecard probes. +This directory contains all the Scorecard probes. A probe is an assessment of a focused, specific problem typically isolated to a particular ecosystem. For example, Scorecards fuzzing check consists of many different probes that assess particular ecosystems or aspects of fuzzing. 
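The probe conventions documented in the README above (outcomes, and probe names phrased so that "yes" is the positive answer, as with `notArchived`) can be sketched with a small self-contained model. This is illustrative only: `Outcome`, `Finding`, and the simplified `notArchived` below are stand-ins, not the real types from Scorecard's `finding` package.

```go
package main

import "fmt"

// Outcome and Finding are simplified stand-ins for Scorecard's
// finding.Outcome and finding.Finding types -- illustrative only.
type Outcome int

const (
	OutcomeNegative Outcome = iota
	OutcomePositive
	OutcomeNotAvailable
)

type Finding struct {
	Probe   string
	Outcome Outcome
	Message string
}

// notArchived models the naming convention: the probe name is phrased so
// that "yes" (OutcomePositive) is the desirable state -- the project is
// NOT archived -- even though the underlying check looks for archival.
func notArchived(archived bool) Finding {
	if archived {
		return Finding{Probe: "notArchived", Outcome: OutcomeNegative, Message: "project is archived"}
	}
	return Finding{Probe: "notArchived", Outcome: OutcomePositive, Message: "project is not archived"}
}

func main() {
	f := notArchived(false)
	fmt.Printf("%s: positive=%v (%s)\n", f.Probe, f.Outcome == OutcomePositive, f.Message)
}
```

The inverted name keeps the "yes means good" rule intact even when the raw signal (archival) is a bad thing.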
From 43422e0bdfb521502a82e95348f203f04d9001ad Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 18:26:40 +0000 Subject: [PATCH 03/12] fix 'golangci' typo Signed-off-by: Adam Korczynski --- CONTRIBUTING.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 69a8c5c3b65..80817317aa1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -153,17 +153,17 @@ As a general rule of thumb, pull requests that change Scorecard score results wi ## Linting -Most linter issues can be fixed with `golangcgi-lint` by following these steps: +Most linter issues can be fixed with `golangci-lint` by following these steps: ``` cd /tmp git clone https://github.com/my-fork-of/scorecard cd /tmp git clone --depth=1 https://github.com/golangci/golangci-lint --branch=v1.55.2 # Latest release -cd golangcgi-lint/cmd/golangcgi-lint +cd golangci-lint/cmd/golangci-lint go build . cd /tmp/scorecard -/tmp/golangcgi-lint/cmd/golangcgi-lint/golangcgi-lint run --fix +/tmp/golangci-lint/cmd/golangci-lint/golangci-lint run --fix ``` ## Permission for GitHub personal access tokens From d228d849ce5d3de153f2f99dd65871170d0d6f9d Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 18:28:48 +0000 Subject: [PATCH 04/12] Added 'make fix-linter' to Makefile Signed-off-by: Adam Korczynski --- CONTRIBUTING.md | 11 ++--------- Makefile | 5 +++++ 2 files changed, 7 insertions(+), 9 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 80817317aa1..117b771a0ba 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -153,17 +153,10 @@ As a general rule of thumb, pull requests that change Scorecard score results wi ## Linting -Most linter issues can be fixed with `golangci-lint` by following these steps: +Most linter issues can be fixed with `golangci-lint` with the following command: ``` -cd /tmp -git clone https://github.com/my-fork-of/scorecard -cd /tmp -git clone --depth=1 
https://github.com/golangci/golangci-lint --branch=v1.55.2 # Latest release -cd golangci-lint/cmd/golangci-lint -go build . -cd /tmp/scorecard -/tmp/golangci-lint/cmd/golangci-lint/golangci-lint run --fix +make fix-linter ``` ## Permission for GitHub personal access tokens diff --git a/Makefile b/Makefile index 68decace10d..61f769401b4 100644 --- a/Makefile +++ b/Makefile @@ -94,6 +94,11 @@ check-linter: | $(GOLANGCI_LINT) # Run golangci-lint linter $(GOLANGCI_LINT) run -c .golangci.yml +fix-linter: ## Install and run golang linter, with fixes +fix-linter: | $(GOLANGCI_LINT) + # Run golangci-lint linter + $(GOLANGCI_LINT) run -c .golangci.yml --fix + add-projects: ## Adds new projects to ./cron/internal/data/projects.csv and ./cron/internal/data/gitlab-projects.csv add-projects: ./cron/internal/data/projects.csv | build-add-script # GitHub From 14e1ccfa9e1dd06b377fa744ee0951f89fcfa13d Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 18:40:22 +0000 Subject: [PATCH 05/12] Move commands to their own table Signed-off-by: Adam Korczynski --- CONTRIBUTING.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 117b771a0ba..e5d414671d1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -139,14 +139,18 @@ Following the targets that can be used to test your changes locally. | Command | Description | Is called in the CI? | | -------- | -------------------------------------------------- | -------------------- | | make all | Runs go test,golangci lint checks, fmt, go mod tidy| yes | -| make unit-test | Runs unit tests only. Good to run often when developing locally. `make all` will also run this. | yes | -| make check-linter | Checks linter issues only. Good to run often when developing locally. `make all` will also run this. | yes | | make e2e-pat | Runs e2e tests | yes | Make sure to signoff your commits before submitting a pull request. 
https://docs.pi-hole.net/guides/github/how-to-signoff/ +When developing locally, the following commands are useful to run regularly to check unit tests and linting. + +| Command | Description | Is called in the CI? | +| -------- | ----------- | -------------------- | +| make unit-test | Runs unit tests only. Good to run often when developing locally. `make all` will also run this. | yes | +| make check-linter | Checks linter issues only. Good to run often when developing locally. `make all` will also run this. | yes | + ## Changing Score Results As a general rule of thumb, pull requests that change Scorecard score results will need a good reason to do so to get merged. From aa2279e112e344350e91e50dd6df8e62a6ed27c9 Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 18:49:11 +0000 Subject: [PATCH 06/12] change 'problem' to 'supply-chain security risk' Signed-off-by: Adam Korczynski --- probes/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/probes/README.md b/probes/README.md index 0b0aa31865b..3fe1075964e 100644 --- a/probes/README.md +++ b/probes/README.md @@ -2,11 +2,11 @@ This directory contains all the Scorecard probes. -A probe is an assessment of a focused, specific problem typically isolated to a particular ecosystem. For example, Scorecards fuzzing check consists of many different probes that assess particular ecosystems or aspects of fuzzing. +A probe is an assessment of a focused, specific supply-chain security risk typically isolated to a particular ecosystem. For example, Scorecards fuzzing check consists of many different probes that assess particular ecosystems or aspects of fuzzing. -Each probe has its own directory in `scorecard/probes`. The probes follow a camelcase naming convention that describe the exact problem a particular probe assesses. +Each probe has its own directory in `scorecard/probes`. The probes follow a camelcase naming convention that describe the exact supply-chain security risk a particular probe assesses.
-Probes can return multiple or a single `finding.Outcome`. Probes should be designed in such a way that a `finding.OutcomePositive` reflects a positive result, and `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; For example, the `finding.OutcomeNotAvailable` is often used for scenarios, where Scorecard cannot assess a project with a given probe. In addition, probes should also be named in such a way that they answer "yes" or "no", and where "yes" answers positively to the problem, and "no" answers negatively. For example, probes that check for SAST tools in the CI are called `toolXXXInstalled` so that `finding.OutcomePositive` reflects that it is positive to use the given tool, and that "yes" reflects what Scorecard considers the positive outcome. For some probes, this can be a bit trickier to do; The `notArchived` probe checks whether a project is archived, however, Scorecard considers archived projects to be negative, and the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines. Note that probes do not do any formal evaluation such a scoring; This is left to the evaluation part once the outcomes have been produced by the probes. +Probes can return multiple or a single `finding.Outcome`. Probes should be designed in such a way that a `finding.OutcomePositive` reflects a positive result, and `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; For example, the `finding.OutcomeNotAvailable` is often used for scenarios, where Scorecard cannot assess a project with a given probe. In addition, probes should also be named in such a way that they answer "yes" or "no", and where "yes" answers positively to the supply-chain security risk, and "no" answers negatively. 
For example, probes that check for SAST tools in the CI are called `toolXXXInstalled` so that `finding.OutcomePositive` reflects that it is positive to use the given tool, and that "yes" reflects what Scorecard considers the positive outcome. For some probes, this can be a bit trickier to do; The `notArchived` probe checks whether a project is archived, however, Scorecard considers archived projects to be negative, and the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines. Note that probes do not do any formal evaluation such a scoring; This is left to the evaluation part once the outcomes have been produced by the probes. A probe consists of three files: From 868560a8461f75d5f77f4bea734ab9ac60518bb4 Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 21:16:15 +0000 Subject: [PATCH 07/12] Add sentence about what a finding is Signed-off-by: Adam Korczynski --- probes/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/probes/README.md b/probes/README.md index 3fe1075964e..0d50a0b2f7e 100644 --- a/probes/README.md +++ b/probes/README.md @@ -6,7 +6,7 @@ A probe is an assessment of a focused, specific supply-chain security risk typic Each probe has its own directory in `scorecard/probes`. The probes follow a camelcase naming convention that describe the exact supply-chain security risk a particular probe assesses. -Probes can return multiple or a single `finding.Outcome`. Probes should be designed in such a way that a `finding.OutcomePositive` reflects a positive result, and `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; For example, the `finding.OutcomeNotAvailable` is often used for scenarios, where Scorecard cannot assess a project with a given probe. 
In addition, probes should also be named in such a way that they answer "yes" or "no", and where "yes" answers positively to the supply-chain security risk, and "no" answers negatively. For example, probes that check for SAST tools in the CI are called `toolXXXInstalled` so that `finding.OutcomePositive` reflects that it is positive to use the given tool, and that "yes" reflects what Scorecard considers the positive outcome. For some probes, this can be a bit trickier to do; The `notArchived` probe checks whether a project is archived, however, Scorecard considers archived projects to be negative, and the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines. Note that probes do not do any formal evaluation such a scoring; This is left to the evaluation part once the outcomes have been produced by the probes. +Probes can return multiple or a single finding, where a finding is a piece of data with an outcome, message, and optionally a location. Probes should be designed in such a way that a `finding.OutcomePositive` reflects a positive result, and `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; For example, the `finding.OutcomeNotAvailable` is often used for scenarios, where Scorecard cannot assess a project with a given probe. In addition, probes should also be named in such a way that they answer "yes" or "no", and where "yes" answers positively to the supply-chain security risk, and "no" answers negatively. For example, probes that check for SAST tools in the CI are called `toolXXXInstalled` so that `finding.OutcomePositive` reflects that it is positive to use the given tool, and that "yes" reflects what Scorecard considers the positive outcome. 
For some probes, this can be a bit trickier to do; The `notArchived` probe checks whether a project is archived, however, Scorecard considers archived projects to be negative, and the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines. Note that probes do not do any formal evaluation such a scoring; This is left to the evaluation part once the outcomes have been produced by the probes. A probe consists of three files: From b0008c51e3a680bd61b3568be1039b17a4c8bafa Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 21:18:26 +0000 Subject: [PATCH 08/12] remove sentence about running make rule locally Signed-off-by: Adam Korczynski --- CONTRIBUTING.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e5d414671d1..c5adb51b28d 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -148,8 +148,8 @@ https://docs.pi-hole.net/guides/github/how-to-signoff/ When developing locally, the following commands are useful to run regularly to check unit tests and linting. | Command | Description | Is called in the CI? | -| make unit-test | Runs unit tests only. Good to run often when developing locally. `make all` will also run this. | yes | -| make check-linter | Checks linter issues only. Good to run often when developing locally. `make all` will also run this. | yes | +| make unit-test | Runs unit tests only. `make all` will also run this. | yes | +| make check-linter | Checks linter issues only. `make all` will also run this. 
| yes | ## Changing Score Results From e9f3dd618bc78b6aac8f76921601a933af6b6f25 Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 9 Jan 2024 21:19:24 +0000 Subject: [PATCH 09/12] change 'supply-chain security risk' to 'heuristic' Signed-off-by: Adam Korczynski --- probes/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/probes/README.md b/probes/README.md index 0d50a0b2f7e..b121d092762 100644 --- a/probes/README.md +++ b/probes/README.md @@ -2,11 +2,11 @@ This directory contains all the Scorecard probes. -A probe is an assessment of a focused, specific supply-chain security risk typically isolated to a particular ecosystem. For example, Scorecards fuzzing check consists of many different probes that assess particular ecosystems or aspects of fuzzing. +A probe is an assessment of a focused, specific heuristic typically isolated to a particular ecosystem. For example, Scorecards fuzzing check consists of many different probes that assess particular ecosystems or aspects of fuzzing. -Each probe has its own directory in `scorecard/probes`. The probes follow a camelcase naming convention that describe the exact supply-chain security risk a particular probe assesses. +Each probe has its own directory in `scorecard/probes`. The probes follow a camelcase naming convention that describe the exact heuristic a particular probe assesses. -Probes can return multiple or a single finding, where a finding is a piece of data with an outcome, message, and optionally a location. Probes should be designed in such a way that a `finding.OutcomePositive` reflects a positive result, and `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; For example, the `finding.OutcomeNotAvailable` is often used for scenarios, where Scorecard cannot assess a project with a given probe. 
In addition, probes should also be named in such a way that they answer "yes" or "no", and where "yes" answers positively to the supply-chain security risk, and "no" answers negatively. For example, probes that check for SAST tools in the CI are called `toolXXXInstalled` so that `finding.OutcomePositive` reflects that it is positive to use the given tool, and that "yes" reflects what Scorecard considers the positive outcome. For some probes, this can be a bit trickier to do; The `notArchived` probe checks whether a project is archived, however, Scorecard considers archived projects to be negative, and the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines. Note that probes do not do any formal evaluation such a scoring; This is left to the evaluation part once the outcomes have been produced by the probes. +Probes can return multiple or a single finding, where a finding is a piece of data with an outcome, message, and optionally a location. Probes should be designed in such a way that a `finding.OutcomePositive` reflects a positive result, and `finding.OutcomeNegative` reflects a negative result. Scorecard has other `finding.Outcome` types available for other results; For example, the `finding.OutcomeNotAvailable` is often used for scenarios, where Scorecard cannot assess a project with a given probe. In addition, probes should also be named in such a way that they answer "yes" or "no", and where "yes" answers positively to the heuristic, and "no" answers negatively. For example, probes that check for SAST tools in the CI are called `toolXXXInstalled` so that `finding.OutcomePositive` reflects that it is positive to use the given tool, and that "yes" reflects what Scorecard considers the positive outcome. 
For some probes, this can be a bit trickier to do; the `notArchived` probe checks whether a project is archived; however, Scorecard considers archived projects to be negative, so the probe cannot be called `isArchived`. These naming conventions are not hard rules but merely guidelines. Note that probes do not do any formal evaluation such as scoring; this is left to the evaluation part once the outcomes have been produced by the probes. From 51fe6f8870142c4bbe299637349012d1586d0636 Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Sun, 14 Jan 2024 18:25:04 +0000 Subject: [PATCH 10/12] Modify text on where to set remediation data Signed-off-by: Adam Korczynski --- probes/README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/probes/README.md b/probes/README.md index b121d092762..94b62968426 100644 --- a/probes/README.md +++ b/probes/README.md @@ -45,4 +45,5 @@ f = f.WithRemediationMetadata(map[string]string{ }) ``` -To display the content to the user, Scorecard needs to print out the findings metadata somewhere in the evaluation part. This can be done with logging. \ No newline at end of file +### Should the changes be in the probe or the evaluation? +The remediation data must be set in the probe. \ No newline at end of file From 52b320e75c6b3d1c5223e0ff6aac1ad24f48c499 Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Sun, 14 Jan 2024 18:30:07 +0000 Subject: [PATCH 11/12] Add example Signed-off-by: Adam Korczynski --- probes/README.md | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/probes/README.md b/probes/README.md index 94b62968426..0c83148409f 100644 --- a/probes/README.md +++ b/probes/README.md @@ -45,5 +45,26 @@ f = f.WithRemediationMetadata(map[string]string{ }) ``` +### Example +Consider a probe with the following line in its `def.yml`: +``` +The project ${{ metadata.oss-fuzz-integration-status }} integrated into OSS-Fuzz.
+``` + +and the probe sets the following metadata: +```golang +f, err := finding.NewWith(fs, Probe, + "Message", nil, + finding.OutcomePositive) +f = f.WithRemediationMetadata(map[string]string{ + "oss-fuzz-integration-status": "is", +}) +``` + +The probe will then output the following text: +``` +The project is integrated into OSS-Fuzz. +``` + ### Should the changes be in the probe or the evaluation? The remediation data must be set in the probe. \ No newline at end of file From 9ca2e14078534c8b15ad1f99e76ff58bb67f820e Mon Sep 17 00:00:00 2001 From: Adam Korczynski Date: Tue, 23 Jan 2024 11:55:13 +0000 Subject: [PATCH 12/12] add line about discussing changes to the score in a GitHub issue Signed-off-by: Adam Korczynski --- CONTRIBUTING.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index c5adb51b28d..7953aa09fe3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -154,6 +154,7 @@ When developing locally, the following commands are useful to run regularly to c ## Changing Score Results As a general rule of thumb, pull requests that change Scorecard score results will need a good reason to do so to get merged. +It is a good idea to discuss such changes in a GitHub issue before implementing them. ## Linting
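The `${{ metadata.* }}` interpolation shown in the OSS-Fuzz example above can be modeled as a plain string substitution. This sketch is an assumption for illustration: `renderText` is a hypothetical helper, not Scorecard's actual rendering code, which lives inside its own probe/documentation machinery.

```go
package main

import (
	"fmt"
	"strings"
)

// renderText mimics replacing ${{ metadata.key }} placeholders in a
// def.yml sentence with a finding's remediation metadata. Hypothetical
// helper; only a model of the behavior shown in the patch above.
func renderText(template string, metadata map[string]string) string {
	out := template
	for key, value := range metadata {
		out = strings.ReplaceAll(out, "${{ metadata."+key+" }}", value)
	}
	return out
}

func main() {
	text := "The project ${{ metadata.oss-fuzz-integration-status }} integrated into OSS-Fuzz."
	fmt.Println(renderText(text, map[string]string{"oss-fuzz-integration-status": "is"}))
	// prints: The project is integrated into OSS-Fuzz.
}
```

This also makes it clear why the remediation data must be set in the probe: the evaluation side only has the finding's metadata to substitute into the documented text.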