Create computational scaling plots with pytest-benchmark and post to GitHub Actions artifact #24
Conversation
Right now I'm writing a matplotlib function that takes the JSON output of pytest-benchmark and turns it into runtime plots; thoughts on this @lpsinger? Error bars are +/- one standard deviation, as reported in stats. I'm not curve-fitting any O(whatever) to them yet, since it might be different for each test, but if you want, I could set up something with scipy.optimize to see what fits best. Oh, and this isn't in the commit yet since it's just me messing around in a Jupyter notebook, but the code that does it is in the screenshot below. It does rely on the specific structure of some parts of the JSON, but I imagine that's okay since the file is always generated by pytest-benchmark.
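The notebook code lives in a screenshot rather than the thread, but a minimal sketch of the same idea might look like the following. It assumes the pytest-benchmark JSON layout (a top-level `"benchmarks"` list whose entries carry `"name"`, `"params"`, and a `"stats"` dict with `"mean"` and `"stddev"`), and it assumes a hypothetical problem-size parameter named `n`; neither the parameter name nor the plotting snippet is taken from the actual notebook.

```python
import json
from collections import defaultdict

def scaling_data(report, param="n"):
    """Group pytest-benchmark results by test name, returning
    lists of (size, mean, stddev) sorted by problem size.

    `param` is the name of the parametrized problem-size argument;
    "n" here is an assumption, not something pytest-benchmark fixes.
    """
    groups = defaultdict(list)
    for bench in report["benchmarks"]:
        # strip the "[...]" suffix pytest adds to parametrized test names
        base = bench["name"].split("[")[0]
        stats = bench["stats"]
        groups[base].append((bench["params"][param],
                             stats["mean"], stats["stddev"]))
    return {name: sorted(rows) for name, rows in groups.items()}

# Plotting side, to run where matplotlib is available:
# import matplotlib.pyplot as plt
# for name, rows in scaling_data(json.load(open("output.json"))).items():
#     n, mean, std = zip(*rows)
#     plt.errorbar(n, mean, yerr=std, label=name)
# plt.legend()
```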
This is a nice starting point. I'd definitely make the axes log-log. Something to consider is whether you implement this as a post-processing script or as a pytest plugin using hook functions. There are advantages to both.
Definitely log-log for now. If you want, you could try adding nomogram axes to guide the eye for an N log N fit.
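A lighter-weight cousin of nomogram axes, sketched here as one reading of "guide the eye": draw a reference N log N curve scaled to pass through a single measured point, so departures from that trend stand out on the log-log plot. The anchoring choice is an assumption, not anything settled in this thread.

```python
import math

def nlogn_guide(ns, anchor_n, anchor_t):
    """Reference N log N curve scaled so it passes through
    (anchor_n, anchor_t); plot it alongside the measured series."""
    scale = anchor_t / (anchor_n * math.log(anchor_n))
    return [scale * n * math.log(n) for n in ns]

# On a log-log errorbar plot this would be something like:
# plt.plot(ns, nlogn_guide(ns, ns[-1], means[-1]), "--", label="N log N")
```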
If I want to do this using hook functions and plugins, I think I'll have to modify pytest-benchmark in some way, since all of the runtime machinery lives there. I'm taking a look at the pytest-benchmark code on GitHub now, but I think it's going to be much more straightforward to write a post-processing script that is called by build-and-test.yml.
Yup, sounds good to me. Although if it is helpful I can meet with you virtually to try to help you grok whatever is not grokked about pytest hooks. |
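For the post-processing route, the wiring in build-and-test.yml could look something like this sketch; the script name, flags, artifact name, and output path are all placeholders, not the project's actual workflow:

```yaml
# hypothetical steps appended after the benchmark run in build-and-test.yml
- name: Plot computational scaling
  run: python plot_benchmarks.py output.json --out scaling.png  # assumed script
- name: Upload scaling plots
  uses: actions/upload-artifact@v4
  with:
    name: scaling-plots
    path: scaling.png
```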
…ors, but the errors it throws should be insightful
Uh oh... if the bugs themselves are insightful, then you've created Skynet!
I think I'm fine for now, since I don't anticipate using them, but if it comes up that I have to, then I would definitely appreciate that.
Haha, I'm indulging in a possibly bad habit of mine: when I'm not sure how to do something, write the most naive implementation possible and, if it fails, google the error message. Basically Cunningham's Law with extra steps.
It's not a bad habit at all. I was just joking about the ambiguity between the errors being insightful and the errors bringing you insight.
Ah yes. I welcome our robot overlords 🤖
…t it is done in and maybe if output.json is there
🙇 🤖 🔥
…vironment but this time for real?
Well, it is making the plots? Tomorrow I'm diving deep into the GitHub API documentation to figure out how to have it post them as comments.
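For the "post them as comments" step, a hedged sketch of the relevant REST call: GitHub's create-an-issue-comment endpoint is `POST /repos/{owner}/{repo}/issues/{number}/comments`, and pull requests share the issues comment API. The repo names, body text, and token handling below are placeholders. Note the endpoint takes Markdown, so a plot would have to be hosted somewhere linkable (e.g. from an uploaded artifact) rather than attached directly.

```python
import json
import urllib.request

API = "https://api.github.com"

def comment_request(owner, repo, number, body, token):
    """Build the request for GitHub's 'create an issue comment'
    endpoint; `number` may be an issue or a pull request number."""
    url = f"{API}/repos/{owner}/{repo}/issues/{number}/comments"
    data = json.dumps({"body": body}).encode()
    return urllib.request.Request(
        url,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

# To actually send it (needs a token with repo scope):
# urllib.request.urlopen(comment_request("owner", "repo", 24, "plots ready", token))
```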
…le.log to debug why it's not commenting
After running into a bunch of git issues thanks to some short-sighted development on main, I have moved this to #28 |
At the moment I'm just getting started on seeing whether pytest-benchmark can be used to produce computational scaling plots, and to that end I'm opening a PR so I can test things. Don't expect this to be merged any time soon.