
cmd/go: support corpus minimization #49290

Open
katiehockman opened this issue Nov 2, 2021 · 5 comments
@katiehockman (Contributor) commented Nov 2, 2021

We should support corpus minimization: the ability to remove items from the on-disk corpus which either 1) don't have the types that the fuzz test accepts (e.g. they are left over from a previous version of the test which took different parameters), or 2) don't add any coverage that isn't already provided by other entries in the corpus.

(1) should be pretty straightforward. We can just unmarshal the contents of the file and see if its types match the fuzz test's parameters. If they don't, delete it.
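For illustration, a rough sketch of (1), assuming only the public corpus file format (a `go test fuzz v1` header followed by one Go-syntax value per line); `corpusFileMatches` and its string-based type extraction are made up for this sketch, not necessarily how cmd/go would implement it:

```go
package fuzzmin

import (
	"os"
	"strings"
)

// corpusFileMatches reports whether the values in a corpus file match the
// fuzz target's parameter types, e.g. wantTypes = []string{"[]byte", "int"}.
// Sketch only: real code would parse each value line with go/parser instead
// of splitting on the first '('.
func corpusFileMatches(path string, wantTypes []string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	lines := strings.Split(strings.TrimSpace(string(data)), "\n")
	if len(lines) == 0 || lines[0] != "go test fuzz v1" {
		return false, nil // not a corpus file we understand
	}
	vals := lines[1:]
	if len(vals) != len(wantTypes) {
		return false, nil // wrong number of arguments
	}
	for i, v := range vals {
		// Each value looks like `int(42)` or `[]byte("...")`; the text
		// before the first '(' names the type.
		open := strings.Index(v, "(")
		if open < 0 || strings.TrimSpace(v[:open]) != wantTypes[i] {
			return false, nil
		}
	}
	return true, nil
}
```

Anything that returns false would be a candidate for deletion from the cached corpus.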

(2) will be a bit more involved, but not necessarily that complicated. We should take a look at how libFuzzer implements this. One potential solution would be to maintain a coverage map and run each corpus item against the fuzz test in turn, updating the map as it runs. If an entry doesn't add any new coverage, delete it. A potential pitfall of this: if we have 20 corpus entries that each cover 1 new line, and 1 corpus entry that covers all 20 of those lines at once, which do we choose?
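A minimal sketch of that greedy pass, with `CorpusEntry` and the `runCoverage` callback standing in for whatever the fuzzing engine actually provides (they are not real internal/fuzz APIs):

```go
// CorpusEntry is a stand-in for one cached corpus input.
type CorpusEntry struct {
	Path string
	Data []byte
}

// minimize replays entries in order against the fuzz function (via the
// hypothetical runCoverage callback, which reports the coverage edges an
// entry hits) and keeps only entries that add at least one new edge.
func minimize(entries []CorpusEntry, runCoverage func(CorpusEntry) map[int]bool) []CorpusEntry {
	covered := make(map[int]bool)
	var kept []CorpusEntry
	for _, e := range entries {
		addsCoverage := false
		for edge := range runCoverage(e) {
			if !covered[edge] {
				covered[edge] = true
				addsCoverage = true
			}
		}
		if addsCoverage {
			kept = append(kept, e)
		}
	}
	return kept
}
```

Note that the result depends on replay order, which is exactly the pitfall above: replaying the one large entry first keeps a single file, while replaying the 20 small ones first keeps all 20. One common heuristic is to fix the order deliberately (e.g. smallest inputs first) so ties are broken toward smaller files.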

At least (2) will be needed for OSS-Fuzz integration, if they end up supporting native Go fuzzing.

@katiehockman katiehockman added NeedsFix The path to resolution is known, but the work has not been done. fuzz Issues related to native fuzzing support labels Nov 2, 2021
@katiehockman katiehockman added this to the Go1.19 milestone Nov 2, 2021
@bcmills (Contributor) commented Nov 2, 2021

A thought regarding (2): if we have an effective minimization algorithm, is it actually necessary to prune the corpus on disk? It probably is worthwhile to prune the inputs stored in the Go build cache, but arguably the former-crashers in the testdata/fuzz directory should be retained as regression tests.

Perhaps we could instead start each fuzzing run by spending some fraction of the time budget re-minimizing the corpus. That would also avoid losing coverage if (for example) the code is modified in a way that significantly changes coverage before refactoring back to something closer to a previous approach (for which the pruned-out inputs might actually become relevant again).

@katiehockman (Contributor, Author) commented:

> It probably is worthwhile to prune the inputs stored in the Go build cache, but arguably the former-crashers in the testdata/fuzz directory should be retained as regression tests.

Yes agreed. When I talk about "corpus minimization" here, I'm only referring to the corpus in the build cache. We shouldn't touch the seed corpus.

> Perhaps we could instead start each fuzzing run by spending some fraction of the time budget re-minimizing the corpus.

We could do this, but I would argue it should be opt-in, e.g. go test -fuzz=Fuzz -fuzzminimizecorpus
Otherwise, I'm imagining a scenario where someone has a fuzz target that accepts []byte. While playing around with their test, they change the target to accept []byte, int. If we prune the corpus by default, then when they run go test -fuzz Fuzz, all of the cached corpus entries that only contain a []byte are going to be deleted before the user realizes what happened.
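For concreteness, the scenario above would look something like this (FuzzParse is a hypothetical test; only the signature matters):

```go
package parse_test

import "testing"

// The target originally accepted a single []byte:
//
//	f.Fuzz(func(t *testing.T, data []byte) { ... })
//
// After adding the int parameter below, every cached corpus entry that
// holds only a []byte fails the type check, so pruning by default would
// silently delete the entire cached corpus.
func FuzzParse(f *testing.F) {
	f.Fuzz(func(t *testing.T, data []byte, n int) {
		_, _ = data, n // exercise the code under test here
	})
}
```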

@katiehockman (Contributor, Author) commented:

@dgryski pointed me to https://arxiv.org/pdf/1905.13055.pdf. Commenting here so we can find it later.

@ianlancetaylor (Member) commented:

Moving to Backlog.

@morehouse commented:

Bump.

Because corpus minimization/merging features are missing, it is difficult for major Go projects to maintain public corpora. PRs adding new inputs to the corpus can't easily be evaluated to determine which seeds actually increase coverage, so we essentially have to accept or reject all new inputs as a package, based on coverprofiles taken before and after the PR. Over time, this leads to major bloat of the corpora.

Those serious about fuzzing Go end up resorting to libFuzzer build mode and doing hacky things to measure code coverage.

It would be so much nicer if we could just use the native fuzzing tools.
