Right now, the coverage maps and corpuses are global and shared between workers. Due to the random nature of fuzzing, the first sequence to cover a PC may have less "potential" than another sequence about to be discovered by a different worker (especially at startup). That is, storing only the first worker's sequence may preclude or reduce the chance of discovering some valuable input through mutation. We could experiment with a per-worker corpus and have a goroutine synchronize the corpuses between workers at some interval, the sync rate. If this is merged, it may also be desirable to dedicate a worker to shrinking the corpus. However, the corpus normally only holds on the order of hundreds of entries, say X, and a per-worker corpus would grow the total to roughly (number of workers) * X. We'd therefore want to track how many times each corpus entry is mutated and tune the corpus size so that entries are mutated neither too frequently nor too infrequently.
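A rough sketch of what the per-worker corpus plus sync goroutine could look like. All of the names here (`workerCorpus`, `entry`, `merge`, `syncCorpora`) are made up for illustration; they are not existing identifiers in the fuzzing code, and the real implementation would key and dedupe entries however the coverage maps already do.

```go
package fuzz

import (
	"sync"
	"time"
)

// entry is a placeholder for a corpus entry (input sequence plus metadata).
type entry struct {
	data      []byte
	mutations int // how many times this entry has been mutated so far
}

// workerCorpus holds one worker's private view of the corpus.
type workerCorpus struct {
	mu      sync.Mutex
	entries map[string]entry // keyed e.g. by a hash of the covered PCs
}

// merge copies entries from other that this corpus has not seen yet.
func (c *workerCorpus) merge(other map[string]entry) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for k, e := range other {
		if _, ok := c.entries[k]; !ok {
			c.entries[k] = e
		}
	}
}

// syncCorpora periodically merges every worker's corpus into every other
// worker's corpus. syncRate is the "sync rate" interval discussed above.
func syncCorpora(workers []*workerCorpus, syncRate time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(syncRate)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			// Snapshot each corpus under its own lock, then merge the
			// snapshots into all the other corpuses.
			snapshots := make([]map[string]entry, len(workers))
			for i, w := range workers {
				w.mu.Lock()
				snap := make(map[string]entry, len(w.entries))
				for k, e := range w.entries {
					snap[k] = e
				}
				w.mu.Unlock()
				snapshots[i] = snap
			}
			for i, w := range workers {
				for j, snap := range snapshots {
					if i != j {
						w.merge(snap)
					}
				}
			}
		}
	}
}
```

One sync goroutine touching every worker's corpus keeps the design simple, but the merge cost grows with the number of workers, which is another reason to keep per-worker corpus sizes in check.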
We would probably also need a better memory management strategy, such as flushing the in-memory corpus intermittently or when it rises above some threshold. This needs more investigation, since the worker reset (workerResetLimit) is supposed to limit memory growth, but maybe there's a better option.
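Continuing the sketch above (and adding "os" and "path/filepath" to its imports), a size-triggered flush could look something like this. The names maybeFlush, corpusDir, and flushAboveBytes are hypothetical, and writing one file per entry keyed by its map key is just an assumption for illustration:

```go
// maybeFlush writes entries to the on-disk corpus directory and drops them
// from memory once the in-memory corpus exceeds flushAboveBytes.
func (c *workerCorpus) maybeFlush(corpusDir string, flushAboveBytes int) error {
	c.mu.Lock()
	defer c.mu.Unlock()

	total := 0
	for _, e := range c.entries {
		total += len(e.data)
	}
	if total <= flushAboveBytes {
		return nil
	}
	for k, e := range c.entries {
		// Persist the entry, then free its in-memory copy. An error aborts
		// the flush so no entry is dropped without being written.
		if err := os.WriteFile(filepath.Join(corpusDir, k), e.data, 0o666); err != nil {
			return err
		}
		delete(c.entries, k)
	}
	return nil
}
```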