Benchmark solving systems of equations #67
Conversation
Can you paste the timings for these? We should try to keep each benchmark under a second at least.
These benchmarks are for things that are made much faster by sympy/sympy#18844. With that PR they are fast (or will be when it is finished), so I guess I'm adding them as aspirational benchmarks.
If things are too slow on previous versions, I think they should be blacklisted for those versions and we should only test for regressions after that.
I guess that the benchmarks need to work on current master when they get merged. I've scaled them back a bit. These are the new timings:
The reason some of these examples fail with larger benchmarks is hidden quadratic behaviour that only comes into play with larger systems of equations. My PR eliminates that, but it's hard to see the effect if you only look at smaller systems.
Is there a mechanism for blacklisting benchmarks?
https://asv.readthedocs.io/en/stable/writing_benchmarks.html#setup-and-teardown-functions |
That's for making the timings faster. LGTM.
I think we should have a range of time lengths. Many benchmarks naturally have a parameter that makes them take longer, so this isn't difficult. The problem is that if a benchmark is too fast, it can end up timing something else, like the overhead of the core or some unrelated helper functions. It can also be harder to detect speedups or slowdowns. For instance, if a function has 500 ms of overhead and spends 500 ms on the actual algorithm, then if the algorithm becomes 50% faster, it will only show up as a 25% improvement (500 + 250 = 750 ms instead of 1000 ms). But if you instead take a version that spends 4 s on the main algorithm, you will see something much closer to 50% (4.5 s vs. 2.5 s). But I also sympathize with the fact that this makes the benchmarks much slower to run, so there needs to be a balance, and a good way to run such things optionally (I don't know if there is anything like that right now). Plus sometimes you do want to detect slowdowns in overhead stuff as well, so having both types of benchmarks is helpful.
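A tiny illustration of that arithmetic (the numbers mirror the comment above; the helper function is only for exposition, not part of the benchmark suite):

```python
# Illustrates how a fixed overhead masks an algorithmic speedup: the
# observed reduction in total time is smaller than the speedup of the
# algorithm itself, and the gap shrinks as the algorithm dominates.
def observed_speedup(overhead, algorithm, factor=0.5):
    """Fractional reduction in total time when only the algorithm
    part is sped up by `factor` (0.5 == 50% faster)."""
    before = overhead + algorithm
    after = overhead + algorithm * (1 - factor)
    return 1 - after / before


print(observed_speedup(0.5, 0.5))  # 0.25: 1.0 s total drops to 0.75 s
print(observed_speedup(0.5, 4.0))  # ~0.444: 4.5 s drops to 2.5 s
```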
I'll merge this now. Once the behaviour on sympy master is actually improved I'll scale up the benchmarks and add blacklisting.
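For reference, the setup/teardown hooks linked above also provide the skipping ("blacklisting") mechanism asked about earlier: asv skips a benchmark when its `setup` raises `NotImplementedError`. A minimal sketch, with an illustrative version cutoff and system (not the PR's actual code):

```python
# Sketch: skip ("blacklist") a benchmark on older SymPy versions by
# raising NotImplementedError from setup(), which makes asv skip it.
# The version cutoff and the system being solved are illustrative.
import sympy
from sympy import symbols, linsolve


class TimeLargeLinsolve:

    def setup(self):
        # Best-effort version parse, e.g. "1.5.1" -> (1, 5).
        version = tuple(int(p) for p in sympy.__version__.split('.')[:2]
                        if p.isdigit())
        if version < (1, 6):
            # Too slow before the relevant speedup; skip entirely.
            raise NotImplementedError("skipped on older SymPy")
        n = 20
        self.xs = symbols('x:%d' % n)
        self.eqs = [sum(self.xs[:i + 1]) - i for i in range(n)]

    def time_linsolve(self):
        linsolve(self.eqs, self.xs)
```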
Adds benchmarks for solve and linsolve for a few cases that came up while working on sympy/sympy#18844:
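For context, an asv-style benchmark for this kind of case might look roughly like the sketch below; the class name and the parameterised system are made up for illustration and are not the PR's actual benchmarks:

```python
# Illustrative asv-style benchmark for solve on a parameterised
# nonlinear system; the system is invented, not the one in this PR.
from sympy import symbols, solve


class TimeSolveQuadraticChain:
    # The benchmark is run once per value of n.
    params = [2, 3, 4]
    param_names = ['n']

    def setup(self, n):
        self.xs = symbols('x:%d' % n)
        # Chain of quadratics: x0**2 = 1, x1**2 = x0, x2**2 = x1, ...
        self.eqs = [self.xs[0]**2 - 1]
        self.eqs += [self.xs[i]**2 - self.xs[i - 1] for i in range(1, n)]

    def time_solve(self, n):
        solve(self.eqs, self.xs)
```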