Is your feature request related to a problem? Please describe.
The fuzzing check lacks the granularity needed to capture some important details when assessing a project. In particular, there is no check for “Is this project fuzzed continuously?”.
Describe the solution you'd like
The current fuzzing check includes the probes:
Including the suggested new probes from #3473, the list gains the following additional probes:
We can group these probes as follows:
Group 1:
Group 2:
The first group of probes relates to the “continuity” of the fuzzing, whereas Group 2 relates to the presence of fuzzing source code.
I’d like to separate the two groups into two different checks, adding a check that identifies whether a project's fuzzing is managed by a known continuous fuzzing solution.
Group 1 probes: these ask whether fuzzers are run regularly and, ideally, in a public way so fuzzing results can be monitored. For example, there are cases where fuzzers exist in a repository but are only run on an ad-hoc basis (sometimes very rarely), whereas a continuous fuzzing workflow gives a guarantee for when fuzzers run. The difference between continuous and non-continuous fuzzing is important and can be a defining factor in whether vulnerabilities are found at all [1]. For consumers this is important, as security is arguably stronger when fuzzers run continuously rather than on an ad-hoc basis.
This is an actionable item: if a project fails this check, there is a concrete task for the maintainer to solve, since e.g. ClusterFuzzLite can be adopted by any project (subject to the language being supported). We can also include additional checks for other ways of running fuzzers in CI, e.g. `go test -fuzz`.
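For illustration, here is a minimal native Go fuzz target of the kind `go test -fuzz` picks up; the `FuzzParse` and `Parse` names are hypothetical stand-ins, not from any specific project:

```go
// parser_fuzz_test.go — native Go fuzzing, run with: go test -fuzz=FuzzParse
package parser

import "testing"

// Parse stands in for whatever input-handling function the project fuzzes.
func Parse(data []byte) error {
	// ... real parsing logic ...
	return nil
}

// FuzzParse is discovered by the `go test -fuzz` tooling. Running it on a
// schedule (or continuously) is what the proposed check would verify.
func FuzzParse(f *testing.F) {
	f.Add([]byte("seed input")) // seed corpus entry
	f.Fuzz(func(t *testing.T, data []byte) {
		// The fuzzer reports panics, hangs, and race findings in Parse;
		// ordinary errors on malformed input are expected and ignored.
		_ = Parse(data)
	})
}
```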
Group 2 probes: these relate to whether source code for fuzzers exists. Splitting the check in two may also open up a more refined understanding here, e.g. measuring the number of fuzzers relative to the size of the codebase. If there is 1 fuzzer for a codebase with 100K lines, it is likely less secure than a codebase with 100 fuzzers and 100K lines.
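As a rough sketch of what such a refinement could compute (the function and its normalization are hypothetical, not an existing Scorecard API):

```go
package main

import "fmt"

// fuzzerDensity is a hypothetical metric: fuzzers per 100K lines of code.
func fuzzerDensity(numFuzzers, linesOfCode int) float64 {
	if linesOfCode == 0 {
		return 0
	}
	return float64(numFuzzers) / (float64(linesOfCode) / 100_000)
}

func main() {
	// The two examples from the issue text:
	fmt.Println(fuzzerDensity(1, 100_000))   // 1.0: 1 fuzzer per 100K lines
	fmt.Println(fuzzerDensity(100, 100_000)) // 100.0: 100 fuzzers per 100K lines
}
```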
Describe alternatives you've considered
Add more refined scoring to the fuzzing check. For example: having source code for various fuzzers scores X, and having continuous fuzzing present scores Y, where X + Y = 10, X > 0, and Y > 0.
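A minimal sketch of how such a split score could be computed, assuming illustrative weights of X = 4 for fuzzer source presence and Y = 6 for continuous fuzzing (the weights and names are examples only, not a proposed final scheme):

```go
package main

import "fmt"

const (
	sourceWeight     = 4 // X: fuzzer source code present
	continuousWeight = 6 // Y: continuous fuzzing present; X + Y = 10
)

func fuzzingScore(hasFuzzerSource, hasContinuousFuzzing bool) int {
	score := 0
	if hasFuzzerSource {
		score += sourceWeight
	}
	if hasContinuousFuzzing {
		score += continuousWeight
	}
	return score
}

func main() {
	fmt.Println(fuzzingScore(true, false)) // 4: fuzzers exist, but only ad-hoc runs
	fmt.Println(fuzzingScore(true, true))  // 10: fuzzers exist and run continuously
}
```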
[1] The importance of continuity in fuzzing - CVE-2020-28362