
LibAFL Saturation Experiment. #1984

Closed · wants to merge 4 commits

Conversation

tokatoka
Contributor

Hi @DonggeLiu
This is the longer fuzzer experiment that I was talking about last month.
For now, can we check whether this fuzzer survives the 24-hour run?

@tokatoka
Contributor Author

The command is

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-12-libafl --fuzzers libafl_saturation

@DonggeLiu
Contributor

> Hi @DonggeLiu This is the longer fuzzer experiment that I was talking about last month. For now, can we check whether this fuzzer survives the 24-hour run?

Sure! It's actually 23 hours : )

Experiment 2024-05-12-libafl data and results will be available later at:
The experiment data.
The experiment report.
The experiment report (experimental).

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-12-libafl --fuzzers libafl_saturation

@tokatoka
Contributor Author

Looks like it was not built. Was something wrong?

@tokatoka
Contributor Author

Never mind 😅
I think I just forgot to refresh the webpage before checking the result.

@tokatoka
Contributor Author

Hello @DonggeLiu
I checked the log. I think the run was successful. Can we do the 48-hour run that we discussed last month?

@DonggeLiu
Contributor

DonggeLiu commented May 17, 2024

> Hello @DonggeLiu I checked the log. I think the run was successful. Can we do the 48-hour run that we discussed last month?

Sure!
Would you mind modifying the experiment-config.yaml as discussed?
Change this to 2 days: https://github.com/google/fuzzbench/blob/master/service/experiment-config.yaml#L6
Change this to false: https://github.com/google/fuzzbench/blob/master/service/experiment-config.yaml#L14
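For reference, the two edits would look roughly like this in service/experiment-config.yaml. This is a hedged sketch: the key names below are assumptions inferred from the surrounding discussion (82800 s = 23 h matches the earlier comment), not verified against the file at those line numbers.

```yaml
# Hypothetical sketch of the two edits to service/experiment-config.yaml.
# Key names are guesses; only the two changed values matter here.
max_total_time: 172800        # was 82800 (23 hours); 172800 s = 48 hours
# ...
preemptible_runners: false    # guessed key: preemptible VMs are capped at 24 h,
                              # so a 48-hour run would need non-preemptible runners
```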

@jonathanmetzman please let us know if I missed anything.
E.g., Shall we run a separate 48-hour exp for base fuzzers beforehand?
I reckon we only have their 24-hour results.

@tokatoka
Contributor Author

done

> E.g., Shall we run a separate 48-hour exp for base fuzzers beforehand?

yeah i'm interested to see that too :)

@DonggeLiu
Contributor

> done
>
> E.g., Shall we run a separate 48-hour exp for base fuzzers beforehand?
>
> yeah i'm interested to see that too :)

Sorry that this took so long, @jonathanmetzman and I were extremely busy last week.
We will start this tomorrow (if not today).

BTW, may I know which baseline fuzzers you are interested in comparing against?
Here are the options, but I presume not all of them are useful? (E.g., some have not been updated in years.)

@tokatoka
Contributor Author

I'd like to see:

  • afl
  • aflfast
  • aflplusplus
  • centipede
  • libafl
  • libfuzzer

please!

@DonggeLiu
Contributor

DonggeLiu commented May 22, 2024

Experiment 2024-05-22-bases data and results will be available later at:
The experiment data.
The experiment report.
The experiment report (experimental).

@DonggeLiu
Contributor

Hi @tokatoka, while we are waiting for the base fuzzers experiment, would you like to run yours in parallel?
This can save some waiting time (particularly if some benchmarks fail), but it requires you to manually combine the two results together when both are ready. In addition, the report won't include the Unique code coverage plots section under each benchmark.

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers afl aflfast aflplusplus centipede libafl libfuzzer

@tokatoka
Contributor Author

tokatoka commented May 22, 2024

> Hi @tokatoka, while we are waiting for the base fuzzers experiment, would you like to run yours in parallel?
> This can save some waiting time (particularly if some benchmarks fail), but it requires you to manually combine the two results together when both are ready. In addition, the report won't include the Unique code coverage plots section under each benchmark.

I can wait; it's better for me to see the combined results.

@tokatoka
Contributor Author

It seems they are stuck after 35 hours?

@tokatoka
Contributor Author

Well, it's fine.
Can we start the experiment for our fuzzer too? @DonggeLiu

@DonggeLiu
Contributor

DonggeLiu commented May 28, 2024

> It seems they are stuck after 35 hours?

Sorry I was traveling this week and did not check emails frequently.
@jonathanmetzman could you please have a look at this? It appears to be stuck at ~35 hours.

> Can we start the experiment for our fuzzer too? @DonggeLiu

We might have to understand why it got stuck first.

@DonggeLiu
Contributor

I can see a lot of errors about requesting metadata; maybe they are related?

network error when requesting metadata

@tokatoka
Contributor Author

Is there anything I can help with? 😃

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-04-bases --fuzzers afl

@DonggeLiu
Contributor

@jonathanmetzman, gentle ping : )

I suspect this is the measurement bottleneck again, probably because the experiment doubles the run time.
Let me restart the experiment with afl only.
If that works, I will restart the experiments with one fuzzer for each.

Experiment 2024-06-04-bases data and results will be available later at:
The experiment data.
The experiment report.
The experiment report (experimental).


For me to copy and paste later:

gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers aflfast 
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers aflplusplus 
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers centipede 
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers libafl 
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers libfuzzer
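The five commands above differ only in the fuzzer name, so a small loop can generate them. This is just a convenience sketch, not something from the thread:

```shell
#!/bin/sh
# Print one gcbrun command per base fuzzer; the template mirrors the
# copy-and-paste list above.
gen_base_cmds() {
  for fuzzer in aflfast aflplusplus centipede libafl libfuzzer; do
    printf 'gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers %s\n' "$fuzzer"
  done
}

gen_base_cmds
```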

@DonggeLiu
Contributor

Hi @tokatoka, thanks for waiting.
The report above confirms that the previous failure was caused by the measurement bottleneck, so I will run only one fuzzer per experiment below.
Once they finish, I will run yours in another experiment.

We can merge the statistics manually later. I don't think we can get unique coverage for each fuzzer this way, but it should give us the overall coverage info as usual.
Hope that's OK.
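Merging the statistics manually could amount to concatenating each experiment's data CSV while keeping a single header row. The helper below is a hypothetical sketch (the `combine_csvs` function and the file names are made up; the thread does not specify how the data is stored):

```shell
#!/bin/sh
# Hypothetical helper: concatenate CSV files that share a header row,
# keeping the header only once. File names and layout are illustrative.
combine_csvs() {
  out=$1
  shift
  first=1
  for f in "$@"; do
    if [ "$first" -eq 1 ]; then
      cat "$f" > "$out"          # first file: keep its header line
      first=0
    else
      tail -n +2 "$f" >> "$out"  # later files: skip the header line
    fi
  done
}

# Example usage (paths are made up):
# combine_csvs combined.csv 2024-06-07-bases-aflfast.csv 2024-06-07-bases-libfuzzer.csv
```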

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-aflfast --fuzzers aflfast

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-aflpp --fuzzers aflplusplus

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases-centipede --fuzzers centipede

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-libaf --fuzzers libafl

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-libfuzzer --fuzzers libfuzzer

@DonggeLiu
Contributor

Experiment 2024-06-07-bases-aflfast data and results will be available later at:
The experiment data.
The experiment report.


Experiment 2024-06-07-bases-aflpp data and results will be available later at:
The experiment data.
The experiment report.


Experiment 2024-05-22-bases-centipede data and results will be available later at:
The experiment data.
The experiment report.


Experiment 2024-06-07-bases-libaf data and results will be available later at:
The experiment data.
The experiment report.


Experiment 2024-06-07-bases-libfuzzer data and results will be available later at:
The experiment data.
The experiment report.

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-libfuzzer --fuzzers libfuzzer

@tokatoka
Contributor Author

@DonggeLiu
Thank you.
They seem to be working. Next can you run my fuzzer?

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-12-libafl --fuzzers libafl_saturation

Sure, I presume this is the right fuzzer?

gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-11-libafl-sat --fuzzers libafl_saturation

@DonggeLiu
Contributor

BTW, I noticed that the afl++ exp failed, likely due to measurement issues again.
Is that an important fuzzer for you? I can further split the experiment by benchmarks if necessary.

@tokatoka
Contributor Author

> Sure, I presume this is the right fuzzer?

Yes that's it.

> BTW, I noticed that the afl++ exp failed, likely due to measurement issues again.
> Is that an important fuzzer for you? I can further split the experiment by benchmarks if necessary.

No, that's not important, so I don't need an extra experiment for aflpp.

Thank you @DonggeLiu

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-13-libafl-sat --fuzzers libafl_saturation

@DonggeLiu
Contributor

Thanks for the confirmation and for waiting, @tokatoka.
Experiment 2024-06-13-libafl-sat data and results will be available later at:
The experiment data.
The experiment report.
The experiment report (experimental).

@tokatoka
Contributor Author

Thank you 👍
We can close this
