Use `process_time` instead of just `time` for measuring test performance. #1413
Conversation
I hope this removes a handful of random test fails on supposedly busy cloud runners. At least it is worth a try. `thread_time` (https://docs.python.org/3/library/time.html#time.thread_time) could help with parallel test runs, too, but that is only available since Python 3.7; `process_time` is available since Python 3.3. This PR is a follow up to
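For illustration, here is a minimal sketch of the distinction the PR relies on, written as a pytest-style test; the workload function and the threshold are placeholders, not the actual PyPDF2 test:

```python
import time


def workload():
    # Stand-in for the code under test (illustrative only).
    return sum(i * i for i in range(10**6))


def test_workload_cpu_time():
    # time.time() measures wall-clock time, which also counts time spent
    # waiting while a busy CI runner schedules other jobs.
    # time.process_time() counts only the CPU time of the current process
    # (available since Python 3.3), so it is less sensitive to noisy
    # neighbours on shared runners.
    start = time.process_time()
    workload()
    elapsed = time.process_time() - start
    assert elapsed < 2.0  # threshold chosen for illustration only
```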
Codecov Report
Base: 94.19% // Head: 94.19% // No change to project coverage 👍
Additional details and impacted files
@@ Coverage Diff @@
## main #1413 +/- ##
=======================================
Coverage 94.19% 94.19%
=======================================
Files 28 28
Lines 5085 5085
Branches 968 968
=======================================
Hits 4790 4790
Misses 176 176
Partials 119 119
☔ View full report at Codecov.
This is a Just in time fix!
@mergezalot,
Simply skip on Python < 3.7.
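A minimal sketch of what such a skip could look like if `thread_time` were adopted; the test body and threshold are placeholders:

```python
import sys
import time

import pytest


@pytest.mark.skipif(
    sys.version_info < (3, 7),
    reason="time.thread_time requires Python 3.7+",
)
def test_timing_with_thread_time():
    start = time.thread_time()
    # ... code under test would run here ...
    assert time.thread_time() - start < 1.0  # illustrative threshold
```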
@pubpub-zz Educated guess: either parallel tests, super busy runners, or low-performance runners. I tried a different approach now:
The test before was too brittle. We need to keep an eye on the benchmarks in the future, but also be careful with interpreting the numbers. Credits to mergezalot in PR #1413
I've added this test as a benchmark: https://py-pdf.github.io/PyPDF2/dev/bench/ There you can see that the performance is very different from run to run. You will get one datapoint for every future commit in
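The benchmark setup itself is not shown in this thread; one common way to produce such a dashboard is pytest-benchmark, whose `benchmark` fixture records timing statistics instead of asserting against a fixed threshold. A hypothetical sketch (the parsing workload is a placeholder):

```python
def parse_document():
    # Stand-in for the real parsing work being measured.
    return sum(i * i for i in range(10**5))


def test_parse_benchmark(benchmark):
    # The `benchmark` fixture is provided by the pytest-benchmark plugin;
    # it calls the function repeatedly and records min/mean/stddev timings.
    benchmark(parse_document)
```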
@mergezalot Thank you for your work on this topic. However, I think timing is only suitable for distinguishing orders of magnitude (e.g. as for
Would it be ok to close this PR?
@MartinThoma yes, let's close this PR. Benchmarking is much better. Thank you.