How we do benchmarking #120
As I very recently discovered, the big feature asv has that pytest-benchmark does not is the ability to do memory usage benchmarking as well.
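For reference, asv's memory benchmarks are just specially named functions that asv discovers by prefix. A minimal sketch (the class and data here are illustrative, but the `time_`/`mem_`/`peakmem_` naming convention is asv's own):

```python
class MemSuite:
    """asv benchmark suite: method-name prefixes select the metric."""

    def setup(self):
        # setup() runs before each benchmark and is not measured
        self.data = list(range(100_000))

    def mem_list(self):
        # mem_* benchmarks report the size of the returned object
        return self.data

    def peakmem_sort(self):
        # peakmem_* benchmarks report peak process memory during the call
        sorted(self.data)
```

asv would run this from a `benchmarks/` directory alongside an `asv.conf.json`; nothing pytest-related is involved.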
https://github.com/bloomberg/pytest-memray is a thin wrapper connecting https://github.com/bloomberg/memray (12K stars) to pytest. Timing tests and memory benchmark tests are often two different tests, so IMO it's fine to use two popular tools, one for each.
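As a sketch of the pytest-memray side: the plugin provides a `limit_memory` marker that fails the test if peak allocations exceed the threshold (assuming the plugin is installed and pytest is invoked with `--memray`; the test body and the 24 MB limit are made up for illustration):

```python
import pytest


# Fails under `pytest --memray` if this test allocates more than 24 MB
# at peak. The threshold is illustrative, not a recommendation.
@pytest.mark.limit_memory("24 MB")
def test_build_table():
    data = [list(range(1_000)) for _ in range(100)]
    assert len(data) == 100
```

Without `--memray` the marker is inert and the test runs as a normal pytest test, so the two kinds of benchmark can live in one suite.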
Still not clear to me whether the pytest way can benchmark over a long period of time like asv can.
There is native capability: https://pytest-benchmark.readthedocs.io/en/latest/comparing.html, but this is where #117 also comes in, to ingest that saved data from
Given the dissatisfaction with Codecov since it was bought out, I am still unsure...
But with common open-source reports we can switch visualization methods. For Codecov the tools use the same standard, and we could switch to
What other options for actually visualising the data exist? If there are already well-maintained options I would feel a lot happier about it. On the memory vs. timing thing: while I agree the benchmarks are likely to be different, running two different pytest plugins and (presumably?) two different visualisation/reporting tools feels like more effort. Maybe that's worth it, but I am not familiar enough with it all to know.
Just to put it on the table: if we wanted, we could use pytest-benchmark to define/run the benchmarks and use asv for visualization (see discussion here); basically we 'just' need to convert pytest-benchmark JSON to asv JSON.
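A rough sketch of what such a converter might look like, assuming pytest-benchmark's documented JSON layout (a top-level "benchmarks" list with per-test "stats"). The asv-side fields here are deliberately simplified and would need aligning with asv's actual result-file schema before asv could consume them:

```python
import json


def pytest_benchmark_to_asv(src_path, dst_path, commit_hash="unknown"):
    """Map a pytest-benchmark JSON dump into an asv-style result dict.

    This is a starting point, not a drop-in: asv's real result files
    carry more metadata (machine info, environment, parameters).
    """
    with open(src_path) as f:
        data = json.load(f)

    # pytest-benchmark stores per-test statistics; asv wants one
    # value (seconds) per benchmark, so take the mean.
    results = {
        bench["name"]: bench["stats"]["mean"]
        for bench in data.get("benchmarks", [])
    }

    asv_result = {
        "version": 2,          # asv result-file schema version (assumed)
        "commit_hash": commit_hash,
        "results": results,
    }
    with open(dst_path, "w") as f:
        json.dump(asv_result, f, indent=2)
    return asv_result
```

The commit hash would come from the `commit_info` block that pytest-benchmark records, or from git directly.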
Interesting comment in that discussion.
Pandas also has a good discussion about asv: pandas-dev/pandas#45049
Looks like speed.python.org uses Codespeed, not to be confused with https://codspeed.io (which apparently must be to measure the speed of fish).
Is there a reason, beyond historical, that we use `asv` over `pytest-benchmark`? Looking at the two tools, `pytest-benchmark` has 1.5x more stars and 8x greater usage (as measured by GH dependencies). Also, `pytest-benchmark` integrates into our existing pytest framework, so this repo might pull tests directly from `astropy`'s test suite, using a `pytest` mark (e.g. `pytest.mark.benchmark_only`).