I have a computer with an AMD Ryzen 7 1800X with 8 cores (16 logical CPUs) and 16 GB of RAM. My project is a Flask website with ~400 tests, most of them testing the generated pages with WebTest, or the models with an in-memory temporary database. There is almost no I/O.
Here are the results for running one test or all of my tests, with `-n0`, `-n1`, or `-nauto`. The single test takes ~0.7s, and it is the slowest one. All the tests pass. The table below shows the duration reported by pytest, then the duration reported by the `time` command (i.e. `time pytest -nauto`).
| command | `pytest -n0` | `pytest -n1` | `pytest -nauto` (= `-n16`) |
| --- | --- | --- | --- |
| 1 test | pytest: 0.94s / time: 4.4s | pytest: 4.51s / time: 8.04s | pytest: 13.38s / time: 16.85s |
| 400 tests | pytest: 60.70s / time: 62.28s | pytest: 63.92s / time: 67.45s | pytest: 21.72s / time: 25.36s |
What I read from these data:

- There is a difference of ~4s between the duration reported by pytest and the duration reported by the system. I suppose it is not really pytest-xdist's doing, but what is the difference due to?
- Going from `-n0` to `-n1` costs between ~3.5s and ~5s. I understand that spawning processes can be costly, but 3.5s seems like a lot, especially when only a few tests are run (see the timing sketch after this list).
- With 1 test, `-n16` is ~8s slower than `-n1`. As 15 workers won't run any test, I guess the 8 additional seconds are lost spawning the 15 useless workers. But why is spawning n workers so much slower than spawning a single worker? Maybe related to *Do not spawn more workers than testcases* #272.
- With 400 tests, `-n16` is not 16 times faster than `-n1`, but only ~2.7 times faster. I understand that there will never be a 16x improvement, but 2.7 is a bit disappointing. Maybe related to *Performance idea: `--sf`/`--slow-first` option to improve resource utilization* #657.
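To get a feel for where the per-worker cost goes, here is a minimal timing sketch (the test path is hypothetical); it separates bare interpreter start-up, the cost of importing pytest, and a full single-test run:

```python
import subprocess
import sys
import time


def timed(cmd):
    """Return the wall-clock duration of running cmd, output discarded."""
    start = time.perf_counter()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start


# Bare interpreter start-up.
print("interpreter:  ", timed([sys.executable, "-c", "pass"]))
# Interpreter + importing pytest (pulls in its whole plugin import chain).
print("import pytest:", timed([sys.executable, "-c", "import pytest"]))
# Full single-test run with one worker, for comparison (path is made up).
print("pytest -n1:   ", timed([sys.executable, "-m", "pytest", "-n1", "tests/test_one.py"]))
```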
The point I find most annoying is spending ~3.5s to spawn the first worker. Is there some way this could be accelerated (either by configuration, or by a patch)?
For instance, what would you think of some kind of pytest agent? That is: a pytest agent running in the background that keeps idle workers warm; when a user launches tests, they are sent to the workers, and the workers do not stop afterwards.
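A very rough sketch of the idea, just to make it concrete (the name, port, and one-line protocol are all hypothetical, and this glosses over real problems such as stale module caches between runs):

```python
# Hypothetical "pytest agent": a long-lived process that pays the
# interpreter start-up and import cost once, then runs pytest.main()
# for each incoming request. A sketch only, not a working design.
import socketserver

import pytest


class AgentHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Made-up protocol: one line of space-separated pytest args.
        args = self.rfile.readline().decode().split()
        # Imports are already warm in this process, so this skips the
        # seconds of start-up paid by a cold `pytest` invocation.
        exit_code = pytest.main(args)
        self.wfile.write(f"exit code: {int(exit_code)}\n".encode())


if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9876), AgentHandler) as server:
        server.serve_forever()
```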
What do you think?
> Is there some way this could be accelerated (either by configuration, or by a patch)?
3.5s seems excessive; pytest itself shouldn't take that long. This should be profiled to figure out why it is taking so much time.
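One way to start profiling (a sketch; the test path is hypothetical): run a single test under `cProfile` and sort by cumulative time to see where the first seconds go.

```python
import cProfile

import pytest

# Sort by cumulative time so the expensive start-up phases
# (plugin imports, collection, worker bootstrapping) float to the top.
cProfile.run("pytest.main(['-n1', 'tests/test_one.py'])", sort="cumulative")
```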
> For instance, what would you think of some kind of pytest agent?
It is an interesting idea, though it has quite a few challenges: when files change, each background worker would need to recollect tests (preferably only in the changed files). Also, the main worker would need to connect to the background workers.