Shrinking multicore #1249
Comments
It seems this was already reported here: #1105 … by you 😄
Haha, yeah, I remember that issue, but this one is different. For example, suppose there was a …
Shrinking with multiple cores is hard with our current (stochastic) approach, since the number of messages required between the cores is likely to be high, and that overhead would offset the gains from multiple workers. If you can, @aviggiano, please test #1250 to see if it improves the shrinking speed on a single worker.
We are actually shrinking on multiple cores, but the synchronization is bad. In #1280, I locked shrinking so that tests are not shared between workers. I think that with proper synchronization it will be possible to share the shrinking of a test between workers, but that requires some code reorganization; a rough sketch of the idea follows.
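As an illustration only (this is not Echidna's actual code, and all names here are hypothetical), one way workers could coordinate is to attach a claim flag to each failing test, so a worker only shrinks a test it has successfully claimed, while all workers share the shortest counterexample found so far:

```haskell
-- Hypothetical sketch: per-test claim flag plus shared "best so far" length.
import Control.Concurrent.STM

data ShrinkSlot = ShrinkSlot
  { claimed :: TVar Bool  -- True while some worker is shrinking this test
  , bestLen :: TVar Int   -- length of the shortest failing sequence seen so far
  }

-- A worker may shrink the test only if it wins the claim.
tryClaim :: ShrinkSlot -> STM Bool
tryClaim slot = do
  busy <- readTVar (claimed slot)
  if busy
    then pure False
    else writeTVar (claimed slot) True >> pure True

-- Publish a shorter counterexample (if any) and release the claim.
release :: ShrinkSlot -> Int -> STM ()
release slot len = do
  modifyTVar' (bestLen slot) (min len)
  writeTVar (claimed slot) False

main :: IO ()
main = do
  slot <- ShrinkSlot <$> newTVarIO False <*> newTVarIO maxBound
  ok   <- atomically (tryClaim slot)   -- this worker now owns the shrink
  print ok
  atomically (release slot 42)         -- found a 42-step counterexample
  atomically (readTVar (bestLen slot)) >>= print
```

With something like this, a worker that loses the claim can keep fuzzing instead of duplicating shrinking work, and a later worker can pick up the slot and try to improve on `bestLen`.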
Describe the desired feature
Currently, it seems like Echidna uses a single core/process/thread to shrink failed sequences.

In some cases, however, we're interested in using 100% of the machine's resources to extract the result for that particular sequence. For example, this is often the case when I am using `stopOnFail: true` and `workers: N` in a multicore setup. I don't care about other failed properties, I only care about the particular one that I know has failed. The problem is that shrinking takes forever even if I bump up N or the number of cores. It seems like shrinking a sequence on a `c5.large` instance takes about the same amount of time as on a `c5.4xlarge` (benchmark pending), which is unexpected.
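For reference, a minimal config for the setup described above might look like the sketch below (assuming the usual YAML config file, often named `echidna.yaml`; only `stopOnFail` and `workers` come from the report above, and the concrete values are illustrative):

```yaml
# Illustrative echidna.yaml for a multicore run that stops at the first failure
workers: 8          # N worker threads in a multicore setup
stopOnFail: true    # stop the campaign as soon as one property fails
shrinkLimit: 5000   # number of shrinking attempts for a failed sequence
```

The expectation in this feature request is that, once a failure is found, all N workers would contribute to shrinking that one sequence instead of leaving it to a single core.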