How to get 100% stability for a target in under 5 minutes (TM) #677
Comments
Thank you for this excellent guide.

Two years ago, an intern on our team added a feature in libFuzzer to automatically ignore unstable edges. But we couldn't find any noticeable improvement from it on ClusterFuzz (probably because looking for real improvements on CF is super hard), so we got rid of the feature. Since then I don't get so worried about instability, and I never really had a case where I thought it really burnt me. Is instability that terrible? Quite a few targets in OSS-Fuzz/ClusterFuzz land are unstable and they seem to find reproducible bugs anyway. I sometimes wonder if this is an issue we take too seriously just because otherwise AFL will display evil red text at you.

When implementing that feature, I did some investigation of unstable targets. One interesting thing I noticed is that a lot of instability is caused by initialization. For example, if my target calls this code:
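(The snippet referenced here did not survive extraction; a minimal sketch of the kind of lazy-initialization pattern being described, with invented names, might look like this:)

```c
// Hypothetical fuzz target: the init branch is taken only on the very first
// execution in a persistent-mode process, so its edges look unstable.
#include <stddef.h>
#include <stdint.h>

static int initialized;

static void init_tables(void) {
  /* expensive one-time setup, e.g. building lookup tables */
  initialized = 1;
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (!initialized)      /* covered only once per process */
    init_tables();
  /* ... parse data ... */
  (void)data; (void)size;
  return 0;
}
```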
The first time the target executes, those initialization branches are hit; on every later execution in the same process they are not, so the corresponding edges show up as unstable. An easy "fix" for that problem is to set persistent executions to 1. This might be worth trying, but I haven't done so because I am worried about the performance impact.

With respect to the aflplusplus_same* experiments: if there are significant differences between the same* fuzzers on any benchmarks, even the unstable ones, I think we need to fix something in FuzzBench (though maybe the fix will involve changing the benchmark). Off-topic for this conversation, but I also think that if there are significant differences between copies of the exact same fuzzer, even without the same random seed, we need to fix something in FuzzBench.
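(Editorial aside, not part of the original comment: in an AFL++-style persistent harness, the number of persistent executions is the bound passed to the `__AFL_LOOP` macro, so "set it to 1" would look roughly like the sketch below; how FuzzBench's drivers expose that knob is not covered here.)

```c
/* Sketch of an AFL++ persistent-mode harness (compile with afl-clang-fast).
 * With __AFL_LOOP(1) every input gets a fresh process, which removes
 * cross-input state and the instability it causes, at a real speed cost. */
__AFL_FUZZ_INIT();

int main(void) {
#ifdef __AFL_HAVE_MANUAL_CONTROL
  __AFL_INIT();
#endif
  unsigned char *buf = __AFL_FUZZ_TESTCASE_BUF;  /* shared-memory test case */

  while (__AFL_LOOP(1)) {                        /* 1 instead of e.g. 1000 */
    int len = __AFL_FUZZ_TESTCASE_LEN;
    /* target_parse(buf, len);                      hypothetical entry point */
    (void)buf; (void)len;
  }
  return 0;
}
```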
I am starting with this:
It doesn't, and we would not. I am also not advocating 100% stability as a must for real-world fuzzing; you want coverage in the target. For real-world fuzzing, a 100% stable target that covers all edges is the best. A 90% stable target that covers all edges is, however, better than a 100% stable target that ignores 10% of the edges: with instability you basically have a partial coverage loss on an edge, with ignoring you have a full loss on that edge.

Why do I still want 100% (or a high number) stability? For my tests that are about speed comparisons, for example (so no functional change), I need a 100% deterministic target, otherwise I have too much entropy in there to be measuring something useful; the system the fuzzer runs on is noisy already. And there are functions that are unstable but also provide value to coverage, e.g. init functions that work with data that comes from the input. Being able to make this decision is where the denylist.txt comes in: you look at the text file, leave in what is uninteresting and remove what should be part of the coverage.

EDIT: some phrases were nonsense and stated the opposite of what I meant :)
ACK. I think we are in agreement.
Not directly relevant to this stability conversation, but you might want to wait for #648. It will probably land in September, since I will be busy at the end of August, but it should hopefully allow reporting of speed and CPU time.
Makes sense.
Select a fuzzbench target:
export TARGET=...
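For example, with the benchmark used at the end of this guide:

```sh
export TARGET=freetype2-2017
```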
Check out fuzzbench, make two edits, and make the target:
The build function should look like this afterwards:
then:
Before the compilation of the fuzzer, add this line:
Note: some targets compile the fuzzer in the overall make. In that case, after the make, rm the fuzzer, set the env, and run the make command again.
Then make the target:
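The make invocation itself is not preserved above; assuming FuzzBench's usual `make build-<fuzzer>-<benchmark>` naming, building the aflplusplus fuzzer against the chosen benchmark would look roughly like:

```sh
# Assumed FuzzBench-style build invocation: make build-<fuzzer>-<benchmark>
make build-aflplusplus-$TARGET
```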
For non-PCGUARD afl++:
AFL_LLVM_DENYLIST=/path/to/denylist.txt
For PCGUARD and any sancov compilers (needs llvm 12):
CFLAGS=-fsanitize-coverage-blocklist=/path/to/denylist.txt
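Both options point at the same kind of file. A hypothetical denylist.txt (function and file names invented for illustration) in the sanitizer special-case-list style, which the clang blocklist flag expects and which, to my understanding, afl++'s AFL_LLVM_DENYLIST also accepts, could look like:

```text
# Hypothetical entries, one per line.
fun:parser_lazy_init
fun:get_wall_time
src:vendored_random.c
```

As the discussion above suggests, review this file and keep only the functions you have decided are uninteresting for coverage; delete the entries that should stay instrumented.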
For ${TARGET} == freetype2-2017 this is: