
Reducing memory allocations in read RTCP methods #185

Draft · wants to merge 3 commits into master
Conversation


@tyohan tyohan commented Sep 19, 2024

I'm profiling the Inlive SFU to see how we can reduce CPU usage, and I found that ReadRTCP keeps allocating new memory for every RTCP packet, which puts high pressure on the GC.

[profiling screenshot]

So I created a benchmark for the unmarshal method to see whether using sync.Pool and a few other improvements could make a significant impact. The results are below:

Before using sync.Pool

BenchmarkUnmarshal-8   	 2621134	       414.3 ns/op	     584 B/op	      16 allocs/op

After using sync.Pool

BenchmarkUnmarshal-8   	 3275593	       356.6 ns/op	     384 B/op	      10 allocs/op
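
For reference, a minimal sketch of the kind of pooled benchmark described above, assuming pion/rtcp's ReceiverReport type; the pool name and setup here are illustrative and not the exact code in this PR:

```go
package rtcp

import (
	"sync"
	"testing"
)

// Illustrative pool of packets reused across Unmarshal calls.
var rrPool = sync.Pool{
	New: func() interface{} { return &ReceiverReport{} },
}

func BenchmarkUnmarshalPooled(b *testing.B) {
	// Marshal a small ReceiverReport once so we have raw bytes to decode.
	raw, err := (&ReceiverReport{SSRC: 0x902f9e2e}).Marshal()
	if err != nil {
		b.Fatal(err)
	}

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		rr := rrPool.Get().(*ReceiverReport) // reuse a packet instead of allocating a new one
		if err := rr.Unmarshal(raw); err != nil {
			b.Fatal(err)
		}
		rrPool.Put(rr) // return it so the next iteration can reuse the memory
	}
}
```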

@Sean-Der
Member

Sean-Der commented Oct 3, 2024

Sorry I missed this @tyohan!

I added you to the org so you can create branches and develop more easily.


codecov bot commented Oct 3, 2024

Codecov Report

Attention: Patch coverage is 88.63636% with 10 lines in your changes missing coverage. Please review.

Project coverage is 79.05%. Comparing base (26bcda5) to head (d76d213).

Files with missing lines   Patch %   Lines
packet.go                  82.50%    7 Missing ⚠️
compound_packet.go          0.00%    3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #185      +/-   ##
==========================================
+ Coverage   78.50%   79.05%   +0.55%     
==========================================
  Files          22       22              
  Lines        2005     2072      +67     
==========================================
+ Hits         1574     1638      +64     
- Misses        336      339       +3     
  Partials       95       95              
Flag Coverage Δ
go 79.05% <88.63%> (+0.55%) ⬆️
wasm 79.05% <88.63%> (+0.55%) ⬆️


@Sean-Der
Member

Sean-Der commented Oct 3, 2024

What happens if users don't call Release?

I am 100% in support of performance changes, as long as they don’t impact existing users!

@tyohan
Author

tyohan commented Oct 4, 2024

Thank you for adding me to the org @Sean-Der

From my understanding of sync.Pool, if we don't release the packet (i.e., don't put it back into the pool), it is treated just like the current implementation: the memory is freed once there are no more references to it. But if it is put back into the pool, it will be reused the next time a new packet is requested.
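
A minimal sketch of that opt-in pattern, with hypothetical helper names that are not part of this PR or the Pion API, could look like:

```go
package rtcp

import "sync"

// Hypothetical pool and helpers to illustrate the opt-in release semantics
// described above; not the API proposed in this PR.
var packetPool = sync.Pool{
	New: func() interface{} { return &ReceiverReport{} },
}

// getReceiverReport hands out a packet, reusing a pooled one when available.
func getReceiverReport() *ReceiverReport {
	return packetPool.Get().(*ReceiverReport)
}

// releaseReceiverReport returns a packet to the pool so later calls can reuse
// its memory. If a caller never calls it, the packet is simply garbage
// collected once unreferenced, exactly like one allocated the usual way.
func releaseReceiverReport(rr *ReceiverReport) {
	*rr = ReceiverReport{} // clear old state before the packet is reused
	packetPool.Put(rr)
}
```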

I'm still trying to figure out the best way to implement this without impacting the Pion API. I'll ask for more feedback in the Pion Slack channel.
