
Analyse C-level coverage of pyperformance suite #421

Closed · mdboom opened this issue Jun 27, 2022 · 5 comments

mdboom (Contributor) commented Jun 27, 2022

Results from gcov

We should look at this to see if there are any potential hot spots that aren't currently covered by the performance suite.

mdboom self-assigned this Jun 27, 2022
Fidget-Spinner (Collaborator) commented Jun 28, 2022

Those are some interesting results! I'm surprised ceval.c's coverage is so low. Just eyeballing the code, some hot spots we don't seem to cover in pyperformance:

  • Async operations
  • Pattern matching

Looks like we could benefit from some benchmarks in those areas.
The rest of the cold paths in ceval.c look like error-handling cases, which IMO are usually fine to ignore.
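
For reference, here's a rough sketch of the kind of async microbenchmark that could fill that gap. It is not an actual pyperformance benchmark (the name and workload are made up), but it uses the same pyperf harness that pyperformance benchmarks are built on:

```python
# Hypothetical sketch of an async microbenchmark; exercises coroutine
# creation, awaiting, and event-loop switching.
import asyncio

import pyperf


async def chain(depth):
    """Await a chain of coroutines, yielding to the event loop at each step."""
    if depth == 0:
        return 0
    await asyncio.sleep(0)  # force a trip through the event loop
    return 1 + await chain(depth - 1)


def bench_async_chain():
    asyncio.run(chain(200))


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_func("async_chain", bench_async_chain)
```

Each timed call spins up a fresh event loop via asyncio.run, so the measurement covers loop startup as well as the coroutine chain itself.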

mdboom (Contributor, Author) commented Jun 28, 2022

Thanks for looking at these.

For async, there is some use of it in the Pyston benchmarks we are trying to port to Python 3.11, so maybe that will soon be covered (though I haven't actually run gcov on them to confirm).

For pattern matching, that's a newer feature, so it hasn't been as critical to performance thus far, but I agree we should add coverage going forward.

Part of my motivation for doing this was the recently discovered regression in tracing performance: what other commonly used yet performance-critical features might we be missing? I haven't found any other red flags, but I also don't have a good intuition about the code base yet.

da-woods commented Jun 29, 2022

I recently wrote a small benchmark as part of writing Cython's implementation of pattern matching (https://github.com/da-woods/cython/blob/match-benchmark/Demos/benchmarks/patma.py).

It's pretty much the definition of "artificial microbenchmark", although it does at least cover some useful cases: exact dict vs. inexact-dict mapping, and unpacking large sequences into a star pattern. I suspect it's too simple for what you'd want, but I'm mentioning it on the off-chance that it's useful.
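
For anyone skimming, here is a tiny illustrative snippet (not the actual Demos/benchmarks/patma.py code) of the constructs mentioned above: a mapping pattern matched against an exact dict vs. a dict-like (non-exact) subject, and a star pattern unpacking a long sequence:

```python
# Illustrative only -- shows the pattern-matching constructs mentioned above,
# not the actual benchmark code.
from collections import UserDict


def classify(subject):
    match subject:
        case {"kind": kind, "value": value}:  # mapping pattern (extra keys are ignored)
            return "mapping", kind, value
        case [first, *rest]:                  # star pattern: capture the remaining items
            return "sequence", first, len(rest)
        case _:
            return "other", None, None


exact_dict = {"kind": "point", "value": 3}           # exact dict subject
dict_like = UserDict({"kind": "point", "value": 3})  # non-exact mapping subject
long_seq = list(range(10_000))

print(classify(exact_dict), classify(dict_like), classify(long_seq)[0])
```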

brandtbucher (Member) commented

Sorry, late to the discussion. I've been farming some pattern matching microbenchmarks for a while now over at https://github.com/brandtbucher/patmaperformance. All it's really missing is coverage of mapping patterns.

I'm trying to avoid using totally artificial code, while still keeping the time mostly spent just doing pattern-matching stuff.
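
In case it helps close that gap, here is a hedged sketch (hypothetical names, not part of patmaperformance) of a mapping-pattern case in a similar spirit: a dispatch loop over small event dicts, so most of the time goes to the mapping-pattern machinery rather than setup:

```python
# Hypothetical mapping-pattern microbenchmark sketch; not part of
# patmaperformance. Dispatches small event dicts by their "type" key.
import pyperf

EVENTS = [
    {"type": "move", "x": 1, "y": 2},
    {"type": "click", "button": "left", "x": 3, "y": 4},
    {"type": "key", "code": 65},
] * 100


def dispatch(events):
    total = 0
    for event in events:
        match event:
            case {"type": "move", "x": x, "y": y}:
                total += x + y
            case {"type": "click", "x": x, "y": y}:
                total += x * y
            case {"type": "key", "code": code}:
                total += code
    return total


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_func("patma_mapping_dispatch", dispatch, EVENTS)
```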

mdboom moved this from In Progress to Done in Fancy CPython Board Jul 7, 2022
mdboom (Contributor, Author) commented Jul 7, 2022

I'm marking this as done for now. If we get a lot of value out of this, we may want to automate publishing these results, but I'm going to consider that a different, follow-on work item.

mdboom closed this as completed Jul 26, 2022