Hi 👋 I'm not sure whether this makes sense as an issue or should be a discussion instead: to the project maintainers, feel free to move this to a discussion if you conclude that's better.
Context: in the Granian project (an HTTP server) we recently introduced some e2e benchmarks across different Python versions, which show a ~30% performance degradation in some tests when comparing Python 3.10 to all later versions (PyO3 0.22, built with the `pyo3_disable_reference_pool` cfg).
Now, the specific tests showing this degradation involve some relatively simple code (a rough sketch of the pattern follows the list):

- a `pyclass` and a `PyDict` object (https://github.com/emmett-framework/granian/blob/c94e73e32a4865a011a4b659ef04bbc0a96e6fd4/src/wsgi/callbacks.rs#L24-L106)
- a `pyclass` object (https://github.com/emmett-framework/granian/blob/c94e73e32a4865a011a4b659ef04bbc0a96e6fd4/src/wsgi/io.rs#L49-L57)
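For reference, the pattern in those two spans roughly boils down to something like the following (a minimal sketch against the PyO3 0.22 Bound API, not the actual Granian code; `Receiver` and `call_app` are illustrative names):

```rust
use pyo3::prelude::*;
use pyo3::types::{PyBytes, PyDict};

// Illustrative stand-in for the per-request pyclass exposed to the application.
#[pyclass]
struct Receiver {
    body: Vec<u8>,
}

#[pymethods]
impl Receiver {
    // Reading data out of the pyclass from Python (the io.rs side of the pattern).
    fn read<'py>(&self, py: Python<'py>) -> Bound<'py, PyBytes> {
        PyBytes::new_bound(py, &self.body)
    }
}

// Building a dict of request data and invoking the Python callable with it
// (the callbacks.rs side of the pattern).
fn call_app<'py>(
    py: Python<'py>,
    app: &Bound<'py, PyAny>,
    path: &str,
    body: Vec<u8>,
) -> PyResult<Bound<'py, PyAny>> {
    let environ = PyDict::new_bound(py);
    environ.set_item("PATH_INFO", path)?;
    environ.set_item("wsgi.input", Bound::new(py, Receiver { body })?)?;
    app.call1((environ,))
}
```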
While I understand an e2e benchmark might suffer from a lot of additional noise compared to a smaller unit benchmark, and there's a lot more to consider (the network stack in the CPython stdlib, for example), I also believe there might be something going on in the PyO3 <-> CPython interop, given that other protocols involving asyncio and a bunch more stuff suffer from a much smaller degradation than the one I referenced. Thus I have two main questions:
- is there any well-known difference from Python 3.11 onwards in how PyO3 interacts with the Python interpreter that might explain this?
- do you have any suggestions on how to investigate this in a more fine-grained way, to help surface any other differences between Python versions that might play a role here? (a rough sketch of what I have in mind follows below)
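To make the second question a bit more concrete, the kind of smaller, isolated benchmark I could put together would look roughly like this (a rough sketch only: it times the dict-building and call path against a no-op callable, assuming the interpreter is initialized via `prepare_freethreaded_python`):

```rust
use std::time::Instant;

use pyo3::prelude::*;
use pyo3::types::PyDict;

fn main() -> PyResult<()> {
    pyo3::prepare_freethreaded_python();
    Python::with_gil(|py| {
        // A no-op callable standing in for the application.
        let app = py.eval_bound("lambda environ: None", None, None)?;

        let iterations: u32 = 1_000_000;
        let start = Instant::now();
        for _ in 0..iterations {
            // The same shape of work as the request path: build a dict, call the app.
            let environ = PyDict::new_bound(py);
            environ.set_item("PATH_INFO", "/")?;
            environ.set_item("REQUEST_METHOD", "GET")?;
            app.call1((environ,))?;
        }
        let elapsed = start.elapsed();
        println!(
            "{iterations} calls in {elapsed:?} ({:.0} ns/call)",
            elapsed.as_nanos() as f64 / f64::from(iterations)
        );
        Ok(())
    })
}
```

Running the same binary against 3.10, 3.11 and 3.12 interpreters would at least tell whether the gap shows up outside the server's network path.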
Thanks in advance 🙏