Closing in favor of the active #8076 fuzzer meta ticket
Made sure that all unresolved tickets have been incorporated (this was already the case), with the exception of #4090, which was moved to the cheatcodes meta issue: #4439
Component
Forge
Describe the feature you would like
This issue tracks general fuzzer improvements that improve both property tests and invariant tests. (There will be a separate tracking issue for things that are strictly invariant test related).
Items are roughly ordered by my opinion of priority, both at the header-level and the bullet-level. I say "roughly ordered" because some items are much bigger scope than others so having a strict ordering doesn't make much sense.
Fuzzer Benchmark/Tests
First priority is some tests and benchmarks to verify that the behavior of the fuzzer has not regressed as it's changed. Potential test cases:
This is tracked in a separate issue here: #3411
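As an illustration of the kind of regression check this section asks for, a benchmark can run a fuzzer with a fixed seed and assert that a metric such as the input rejection rate hasn't drifted between versions. The sketch below is a toy in Python, not the forge fuzzer; the names (`rejection_rate`, the example predicate) are made up for illustration:

```python
import random

def rejection_rate(predicate, gen, trials=10_000, seed=0):
    """Fraction of generated inputs rejected by a test's precondition."""
    rng = random.Random(seed)  # fixed seed keeps the benchmark reproducible
    rejected = sum(1 for _ in range(trials) if not predicate(gen(rng)))
    return rejected / trials

# Toy precondition: the test only accepts even inputs; with a uniform
# generator roughly half of all generated cases are rejected.
rate = rejection_rate(lambda x: x % 2 == 0, lambda rng: rng.randrange(1_000))
assert rate < 0.55, f"rejection rate regressed above baseline: {rate:.2%}"
```

Because the seed is pinned, the measured rate is deterministic, so a CI job can fail loudly if a fuzzer change pushes it past the baseline.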
Refactor / Tech Debt / Cleanup
fuzz.max_global_rejects config #3153

Features / Performance
High Priority
Low priority:
vm.canRevert(): allow tests to revert in expected cases #4090
vm.writeLine, so this isn't necessary IMO. May be worth closing
max_test_reject_rate: set a maximum test rejection rate per test function #4091 — This flag would conflict with the existing max_test_rejects flag, so one needs to take precedence, and we need to warn if both exist. Personally this feels like the complexity/config growth isn't worth it since the same functionality can already be accomplished, so it may be worth closing
Failure Persistence and Replay
I have this last because the functionality can be replicated a bit tediously by copying failed tests into concrete tests.
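The manual workflow described above (copy a failing case into a concrete test) is exactly what built-in failure persistence would automate. Below is a toy sketch in Python of the general technique — not Foundry's implementation, and every name (`prop`, `fuzz`, the corpus path) is illustrative: persisted counterexamples are replayed before any fresh random inputs, so a once-found failure reproduces deterministically on the next run.

```python
import json
import os
import random
import tempfile

def prop(x):
    """Property under test, with a planted bug: fails for x >= 900."""
    return x < 900

def fuzz(corpus_path, trials=200, seed=1):
    """Replay persisted failures first, then try fresh random inputs.
    Any counterexample found is persisted so the next run hits it first."""
    failures = []
    if os.path.exists(corpus_path):
        with open(corpus_path) as f:
            failures = json.load(f)
    rng = random.Random(seed)
    for x in failures + [rng.randrange(1_000) for _ in range(trials)]:
        if not prop(x):
            with open(corpus_path, "w") as f:
                json.dump(sorted(set(failures + [x])), f)
            return x  # counterexample, now persisted for replay
    return None

path = os.path.join(tempfile.mkdtemp(), "failures.json")
first = fuzz(path)   # random search discovers a counterexample
replay = fuzz(path)  # second run replays the persisted failure first
assert first is not None and replay == first
```

The design point is the ordering: replaying the corpus before generating new inputs turns every past failure into a free regression test, which is what copying cases into concrete tests achieves by hand today.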
Long Term Fuzz Techniques
Some more advanced techniques to consider down the road:
Additional context
No response