Fixes ClusterFuzz issues 67399 and 55299
## Issue 67399: gitpython: Fuzzing build failure

Since: 2024-03-11

### The Problem

The pre-installed version of `pip` (19.2.3) was outdated and unable to parse the `pyproject.toml` syntax during the install step in `build.sh`, causing the script to error out and crash.

### The Solution
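A minimal sketch of the fix, assuming `build.sh` drives the install with `pip` directly (the exact invocation in the script may differ):

```shell
# Upgrade pip before installing the project, so it can parse the
# modern pyproject.toml syntax that 19.2.3 chokes on.
# (The exact invocation is an assumption about build.sh.)
python3 -m pip install --upgrade pip
```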
Upgrading `pip` to the latest version in the project image resolves the issue and allows the installation to complete.

## Issue 55299: gitpython: Coverage build failure
Since: 2023-01-21
### The Problem

(my hypothesis, at least)

I believe the root of the issue was that fuzzer initialization and execution took too long for the actual run to generate a meaningful corpus. I suspect this because the harness relies on `atheris.instrument_all()` to instrument 4,000+ functions before fuzzer execution can begin, which was causing a significant delay (on my local machine, at least) before actual test execution would start.

### The Solution(-ish)
The commit message on 908ba9c should sum it up, but the TL;DR is that I reduced the scope of instrumented functions to align more closely with the APIs being fuzzed, and added dictionaries and seed corpora, which produced promising results locally. `fuzz_tree.py` is still slow in terms of average exec/sec, but startup is quicker, and with the seed data it approaches its coverage depth fairly quickly as well.