Analyse C-level coverage of pyperformance suite #421
Comments
Those are some interesting results! I'm surprised. Looks like we could benefit from some benchmarks in those areas.
Thanks for looking at these. For async, there are some uses of this in the Pyston benchmarks we are trying to port to Python 3.11, so maybe those will soon be covered (though I haven't actually run gcov on it to confirm). For pattern matching, that's a newer feature, so it hasn't been as critical to performance thus far, but I agree we should add coverage going forward. Part of my motivation for doing this was the recently discovered regression in tracing performance: what other unusual-yet-critical features might we be missing? I haven't found any other red flags yet, but I don't have a good intuition about the code base yet either.
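As a rough, hypothetical illustration of the kind of coverage that was missing around the tracing regression (this is not code from the suite; names and workloads are made up), a tiny `sys.settrace` microbenchmark might look like this:

```python
import sys
import time


def workload(n):
    # Trivial loop; the cost of interest is the per-line trace callback,
    # not the arithmetic itself.
    total = 0
    for i in range(n):
        total += i * i
    return total


def noop_tracer(frame, event, arg):
    # Minimal tracer: return itself so line events keep firing.
    return noop_tracer


def bench(n=100_000):
    start = time.perf_counter()
    workload(n)
    untraced = time.perf_counter() - start

    sys.settrace(noop_tracer)
    try:
        start = time.perf_counter()
        workload(n)
        traced = time.perf_counter() - start
    finally:
        sys.settrace(None)

    print(f"untraced: {untraced:.4f}s  traced: {traced:.4f}s  "
          f"overhead: {traced / untraced:.1f}x")


if __name__ == "__main__":
    bench()
```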
I recently wrote a small benchmark as part of writing Cython's implementation of pattern matching (https://github.com/da-woods/cython/blob/match-benchmark/Demos/benchmarks/patma.py). It's pretty much the definition of an "artificial microbenchmark", although it does at least cover some useful cases: exact-dict vs. inexact-dict mapping patterns, and unpacking large sequences into a star pattern. I suspect it's too simple for what you'd want, but I'm mentioning it on the off-chance that it's useful.
Sorry, late to the discussion. I've been farming some pattern matching microbenchmarks for a while now over at https://github.com/brandtbucher/patmaperformance. All it's really missing is coverage of mapping patterns. I'm trying to avoid using totally artificial code, while still keeping the time mostly spent just doing pattern-matching stuff.
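For context, a minimal sketch of the kinds of cases discussed above (a mapping pattern matched against a dict with extra keys, and a star pattern over a long sequence). All names and workloads here are illustrative and not taken from either repository:

```python
import time


def match_mapping(obj):
    # Mapping pattern: matches any mapping containing at least these keys,
    # regardless of extra keys present in the subject.
    match obj:
        case {"x": x, "y": y}:
            return x + y
        case _:
            return None


def match_star(seq):
    # Sequence pattern with a star capture for the middle of a long sequence.
    match seq:
        case [first, *middle, last]:
            return first + last + len(middle)
        case _:
            return None


def bench(n=100_000):
    mapping = {"x": 1, "y": 2, "extra": 3}
    seq = list(range(100))
    start = time.perf_counter()
    for _ in range(n):
        match_mapping(mapping)
        match_star(seq)
    print(f"{time.perf_counter() - start:.3f}s for {n} iterations")


if __name__ == "__main__":
    bench()  # requires Python 3.10+ for structural pattern matching
```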
I'm marking this as done for now. If we get a lot of value out of this, we may want to automate publishing these results, but I'm going to consider that a separate, follow-on work item.
Results from gcov
We should look at this to see if there are any potential hot spots that aren't currently covered by the performance suite.
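A minimal sketch of how the gcov output could be scanned for never-executed lines, assuming the standard `.gcov` text format (the directory layout and the per-file summary are assumptions for illustration, not part of the actual analysis):

```python
import glob
import re

# Lines in a .gcov report look like "  count:  lineno:  source", where
# "#####" in the count column marks an executable line that was never hit
# and "-" marks a non-executable line.
NEVER_HIT = re.compile(r"^\s*#####:\s*\d+:")
EXECUTED = re.compile(r"^\s*\d+\*?:\s*\d+:")


def summarize(gcov_dir="."):
    """Report the fraction of executable lines never hit, per .gcov file."""
    results = []
    for path in glob.glob(f"{gcov_dir}/*.gcov"):
        missed = executed = 0
        with open(path) as f:
            for line in f:
                if NEVER_HIT.match(line):
                    missed += 1
                elif EXECUTED.match(line):
                    executed += 1
        total = missed + executed
        if total:
            results.append((missed / total, missed, total, path))
    # Files with the most uncovered code first: candidate blind spots.
    for ratio, missed, total, path in sorted(results, reverse=True):
        print(f"{ratio:6.1%}  {missed:5}/{total:<5} uncovered  {path}")


if __name__ == "__main__":
    summarize()
```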