Transport causes CPU Spikes on Larger Envelopes #2384
Comments
possible urllib3 stuff
I did some profiling today. (Because sending is done in a worker thread, I profiled on my local machine; inside Sentry we would not have enough samples.)

Profiling setup: I used

a) send 10 envelopes with big attachments (10,000 "hello world"s):
b) send 50 envelopes with big attachments (10,000 "hello world"s):
c) send 50 envelopes with small attachments (10 "hello world"s):

So there is nothing that jumps out. Most of the time is spent in the HTTP lib (which was kind of our guess). I tried to disable SSL certificate verification, but this made no difference at all.

One thing of note: sending 10 envelopes takes around 350 ms and sending 50 envelopes takes around 2,500 ms (no matter what size the attachments are), so that is not linear growth.

We could create an AsyncHttpTransport (swapping out urllib3 for aiohttp) to see if the result changes drastically. But this is not done in a day.
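For reference, a minimal sketch of a comparable setup. The DSN, the payload sizes, and the `send_envelopes` helper are illustrative placeholders, not the original test script; since the HTTP work happens on the SDK's background worker thread, timing the flush is used here as a rough proxy for it.

```python
# Rough sketch of the timing side of the experiment above.
# DSN and payload sizes are placeholders.
import time

import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

def send_envelopes(n, attachment):
    for _ in range(n):
        with sentry_sdk.push_scope() as scope:
            scope.add_attachment(bytes=attachment, filename="payload.txt")
            sentry_sdk.capture_message("transport profiling test")
    # Block until the background worker has drained the queue. The worker
    # may already start sending during the loop, so this is approximate.
    start = time.perf_counter()
    sentry_sdk.flush(timeout=60)
    return time.perf_counter() - start

big = b"hello world\n" * 10_000
small = b"hello world\n" * 10

print(f"10 big:   {send_envelopes(10, big):.3f}s")
print(f"50 big:   {send_envelopes(50, big):.3f}s")
print(f"50 small: {send_envelopes(50, small):.3f}s")
```

Because the sends happen off the main thread, a sampling profiler that sees all threads (e.g. py-spy) is a better fit for the actual profiling than cProfile, which only instruments the thread it is enabled on.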
Another note: I also tracked the CPU utilization of my test process. It maxed out at 54% (10 small envelopes), 56% (10 large envelopes), and 61% (50 large envelopes), so I could not reproduce a CPU utilization spike.
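The comment does not say how utilization was tracked; one way to do it, as a sketch assuming `psutil` (a hypothetical tooling choice):

```python
# Sample the test process's CPU utilization in a background thread
# while the envelope-sending test runs. psutil is an assumed tool;
# the original comment does not say what was used.
import os
import threading
import time

import psutil

proc = psutil.Process(os.getpid())
done = threading.Event()
samples = []

def track_cpu(interval=0.1):
    proc.cpu_percent()  # prime: the first call always returns 0.0
    while not done.is_set():
        time.sleep(interval)
        samples.append(proc.cpu_percent())

t = threading.Thread(target=track_cpu, daemon=True)
t.start()

# ... run the envelope-sending test here ...

done.set()
t.join(timeout=1)
print(f"max CPU: {max(samples, default=0.0):.0f}%")
```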
OK, after some debugging I did not find the root cause, so I also consulted the admins of the urllib3 Discord. One thing they immediately told us is that we should increase the pool size. Another thing was that decoding TLS traffic uses CPU, but there is a feature to turn it off. This is what I did. It did not get rid of all the CPU spikes in my test setup, but instead of 1 in 3-4 runs, the CPU now only spikes in 1 in 7-8 runs. So it is better now.
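For context, the pool in question is urllib3's connection pool. A sketch of what a larger pool looks like in plain urllib3 (the numbers are arbitrary, and the SDK constructs its own `PoolManager` internally, so this only illustrates the knobs):

```python
import urllib3

# PoolManager defaults to maxsize=1 per host pool; with block=False,
# concurrent requests beyond that open throwaway connections, each
# paying a fresh TLS handshake.
http = urllib3.PoolManager(
    num_pools=2,  # number of per-host pools to cache
    maxsize=8,    # keep up to 8 reusable connections per host
    block=False,
)

resp = http.request("GET", "https://sentry.io/")
print(resp.status)
```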
Another thing of note: OpenSSL 3 has performance problems: python/cpython#95031
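To check which OpenSSL your Python build links against (relevant to the issue above):

```python
import ssl

print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 3.0.10 1 Aug 2023"
```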
How do you use Sentry?
Sentry SaaS (sentry.io)
Version
1.31
Steps to Reproduce
Sending this to Sentry causes the transport to create CPU spikes that are very visible in our Kubernetes pods. It also happens if gzip compression is disabled. It appears to be related to how `capture_envelope` works within the transport.
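The original snippet is not preserved in this copy; a hypothetical minimal stand-in that exercises the same path (placeholder DSN and attachment size):

```python
# Hypothetical stand-in for the elided snippet: one event carrying a
# large attachment, which is enough to exercise capture_envelope and
# the HTTP transport. DSN and size are placeholders.
import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

with sentry_sdk.push_scope() as scope:
    scope.add_attachment(bytes=b"hello world\n" * 10_000, filename="payload.txt")
    sentry_sdk.capture_message("large envelope test")

sentry_sdk.flush()  # make sure the worker sends before the process exits
```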
Expected Result
I would not expect the CPU usage to spike.
Actual Result
It spikes.