
Experiment request for custom benchmarks #2026

Closed

Conversation

@ardier (Author) commented Aug 14, 2024

Description

Add mutant-based benchmarks and update the experiment data in the YAML file. This experiment only introduces new benchmarks, since we aim to address the saturated seed corpus problem through corpus reduction techniques.

We have decided to use AFL and AFL++ for this experiment to observe whether the outcomes differ between the two fuzzers.

We use four benchmarks:

  1. The original seed corpus from the lcms_cms_transform_fuzzer benchmark
  2. An unfiltered seed corpus from our saturated corpus
  3. Filtering strategy one applied to the seed corpus
  4. Filtering strategy two applied to the seed corpus
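
The two filtering strategies are not spelled out in this PR. As a rough illustration only (the function name, data shapes, and example seeds below are assumptions, not this PR's code), a corpus-reduction pass can be sketched as a greedy set cover that keeps a seed only if it contributes coverage not already provided:

```python
def minimize_corpus(seed_coverage):
    """Greedy set-cover corpus reduction (illustrative sketch).

    seed_coverage maps a seed name to the set of targets it covers
    (edges, killed mutants, etc.). Seeds are considered largest
    coverage first; a seed is kept only if it adds something new.
    """
    kept, covered = [], set()
    for seed, targets in sorted(
            seed_coverage.items(), key=lambda kv: -len(kv[1])):
        if not targets <= covered:  # contributes new coverage
            kept.append(seed)
            covered |= targets
    return kept

# Hypothetical seeds: seed_b is subsumed by seed_a and gets dropped.
corpus = {
    "seed_a": {1, 2, 3},
    "seed_b": {2, 3},
    "seed_c": {4},
}
print(minimize_corpus(corpus))  # → ['seed_a', 'seed_c']
```

Under this framing, the strategy variants would differ mainly in what each per-seed set contains (e.g. covered edges vs. killed mutants).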

google-cla bot commented Aug 14, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.

@ardier ardier force-pushed the minimized-subsumed-mutants-benchmark branch from 963ac66 to 25cbd4a on August 14, 2024 13:49
@ardier (Author) commented Aug 15, 2024

Updated the Description.

@ardier ardier marked this pull request as draft August 22, 2024 14:43
@ardier ardier marked this pull request as ready for review August 22, 2024 14:43
@ardier (Author) commented Aug 22, 2024

@DonggeLiu @jonathanmetzman Could you please have a look?

@DonggeLiu (Contributor) commented:

Hi @ardier, we are happy to run experiments for you, but could you please:

  1. Move the seeds directory in this PR to external storage (e.g., a separate GitHub repo) and download it in the Dockerfile? Otherwise the 'Files changed' tab becomes too slow or crashes.
  2. Make a trivial modification to service/gcbrun_experiment.py?
    This will allow me to launch experiments in this PR before merging. Here is an example that adds a dummy comment : )
  3. Write your experiment request in this format, swapping the --experiment-name, --fuzzers, and --benchmarks parameters with your values:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name <YYYY-MM-DD-NAME> --fuzzers <FUZZERS> --benchmarks <BENCHMARKS>

We would really appreciate that.
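
For item 1, a minimal sketch of the requested Dockerfile change, assuming the seeds are packaged as a tarball; the URL, archive name, and destination directory below are placeholders, not this PR's actual values:

```dockerfile
# Fetch the seed corpus from external hosting instead of
# committing the seed files to this repository.
RUN wget -q https://example.com/path/to/seeds.tar.gz -O /tmp/seeds.tar.gz && \
    mkdir -p $OUT/seeds && \
    tar -xzf /tmp/seeds.tar.gz -C $OUT/seeds && \
    rm /tmp/seeds.tar.gz
```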

@ardier (Author) commented Sep 3, 2024

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

Commits in this push:

  1. host seeds elsewhere
  2. modified files needed to run the experiment
  3. fixed date

@ardier ardier force-pushed the minimized-subsumed-mutants-benchmark branch from 25cbd4a to 52374a0 on September 3, 2024 09:27
@ardier (Author) commented Sep 3, 2024

@DonggeLiu, apologies that this took a while for me to get to. I have applied the changes you asked for. Please let me know if I should be taking any additional steps.

@DonggeLiu (Contributor) commented:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

@ardier (Author) commented Sep 10, 2024

Hello. I don't see the results of this experiment anywhere. Am I missing something, or do I need to take additional steps to generate the reports?

@DonggeLiu (Contributor) commented:

Sorry @ardier, it appears Cloud Build failed to pick up the previous experiment request command:

[screenshot of the failed Cloud Build invocation]

Let me retry this.

@DonggeLiu (Contributor) commented:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-11-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

@ardier (Author) commented Sep 11, 2024

No problem. Thank you for looking into this.

@DonggeLiu (Contributor) commented:

Hi @ardier, the experiment request failed again for the same reason, and the cloud logs contain no further detail.

Let's try once more, and I will spend time debugging it if it fails again.

@DonggeLiu (Contributor) commented:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-12-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

@DonggeLiu (Contributor) commented:

Experiment 2024-09-12-afl-mutants data and results will be available later at:

  1. The experiment data.
  2. The experiment report.
  3. The experiment report (experimental).

@DonggeLiu (Contributor) commented Sep 12, 2024

A quick update on this:

  1. The experiment launched successfully this time (finally).
  2. No report was generated because of a known issue with llvm-profdata coverage measurement: coverage could not be measured, so a report cannot be produced.
  3. From the cloud log, the issue so far appears only in the lcms_cms_transform_fuzzer_dominator_mutants benchmark.
  4. I will rerun the experiment without that benchmark.
  5. Unfortunately, the error then occurred on all benchmarks; I reckon that is because they are all based on lcms.
  6. Would you mind using other benchmarks? If not, I can run some other benchmarks and let you know which ones work.
  7. We will look into ways to fix this, but I am currently fully occupied by other tasks, and it may take weeks before I can get back to this.

@ardier ardier closed this Nov 26, 2024
@ardier ardier deleted the minimized-subsumed-mutants-benchmark branch November 26, 2024 18:10
@ardier (Author) commented Nov 26, 2024

I created a simpler experiment here with the suggested changes to ensure everything works before creating a larger experiment.
