How we Visualize Benchmarks #117
Comments
you lost me at "free for open source" I will generally push back against anything which isn't actually open source, especially if it's a hosted service at the whims of VC (I am assuming).
Like many of GH's tools? Or RTD? Or Pre-commit.ci? 😁 I think most of our CI is in the category "free for open source".
Yes, but also like Travis CI 😜
I'm just not sure where we would go from there. We should definitely vet our tools. But if we fundamentally don't trust anything in industry then we shouldn't use GH Actions under the free-for-open-source tier (we do), RTD under the free-for-open-source tier, Pre-commit under the same, encourage new users to use GH virtual development environments, CircleCI, etc. Our actual inert code is one of the few things that isn't in a free-for-open-source tier. And CodSpeed is kind of like codecov, in that it is a user-friendly layer on top of open-source tooling, like pytest-benchmark.
We actually pay for ReadtheDocs because we think they need the money, plus we wanted something from them, but I now forget what. Details are here astropy/astropy-project#105
Personally, I'm less worried about something like codspeed.io going away. It might be helpful, but it's not critical. If it goes away, we're just where we are now without regular benchmarks. That's different from e.g. GitHub itself; while we could move to Bitbucket or GitLab, that would be a lot more disruptive. So the question is how much effort it would be to set up. If someone can set it up in 4 h, and they start charging, we only lose 4 h of work. If it's a lot more effort to set up and maintain, then we should be more careful in vetting.
https://docs.codspeed.io/#how-long-does-it-take-to-install
I don't think this replaces the part where we run the benchmark for every commit on
https://news.ycombinator.com/item?id=36682012
Throwing out a suggestion. Rather than ASV I recommend https://docs.codspeed.io/.
CodSpeed is really good, free for open source, and integrates deeply with pytest (see pytest-benchmark). With CodSpeed plus the pytest ecosystem it's easy for us to have a specific benchmark test suite and also have benchmarked tests in the normal test suite.