[Discussion] Cache instances of MqttClientPublishResult #1325

Closed
wants to merge 3 commits

Conversation

@gfoidl (Member) commented Jan 4, 2022

Continuing the profiling session from #1323 and #1324, this PR caches instances of MqttClientPublishResult so that these 100,000 allocations are gone.

Note: in order to make this work, the client user code needs to be adapted:

-await mqttClient.PublishAsync(publishMessage, stoppingToken);
+using var result = await mqttClient.PublishAsync(publishMessage, stoppingToken);
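
For illustration only (this is not the actual diff of the PR): a minimal sketch of how a cached, disposable result type could hand itself back to a pool in Dispose. The ConcurrentBag-based cache, the type name PooledPublishResult, and the property subset shown here are assumptions.

```csharp
using System;
using System.Collections.Concurrent;

// Sketch only: a result type that is rented from and returned to a static cache.
public sealed class PooledPublishResult : IDisposable
{
    static readonly ConcurrentBag<PooledPublishResult> s_cache = new ConcurrentBag<PooledPublishResult>();

    public ushort PacketIdentifier { get; internal set; }
    public string ReasonString { get; internal set; }

    // Called by the client instead of "new" when a publish completes.
    internal static PooledPublishResult Rent()
        => s_cache.TryTake(out var result) ? result : new PooledPublishResult();

    public void Dispose()
    {
        // Clear the state so a later Rent() doesn't observe stale data, then return to the cache.
        PacketIdentifier = 0;
        ReasonString = null;
        s_cache.Add(this);
    }
}
```

With a shape like this, the `using var result = ...` shown above is what triggers the return to the cache.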

(Screenshot: caching_01)
(Same test program for profiling as in the PR linked above.)

The same caching technique -- or other, maybe better, caches -- could be extended to more types.
For "internal" types (they are all public, but not meant to be user-facing) it shouldn't have any user-visible effect. I tried it for MqttPublishPacket, but didn't know when the instances are no longer needed, so I wasn't able to return them to the cache. Thus that commit got reverted.


Caching / pooling is quite hard to get right. In essence the GC can also be seen as a pool we rent objects from; or, put another way, with pooling we create a custom memory allocator.
So renting objects from our own pool is always a trade-off -- especially since we have to return them to the pool ourselves to take advantage of pooling, whereas with the "GC pool" the GC does this on our behalf by scanning the object roots (see the sketch below).
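
To make the rent/return point concrete, here is a small sketch using Microsoft.Extensions.ObjectPool (not part of this PR; PacketBuffer is a made-up type): unlike the "GC pool", the object only becomes reusable once we explicitly hand it back.

```csharp
using Microsoft.Extensions.ObjectPool;

// Made-up payload type just for the demo.
public sealed class PacketBuffer
{
    public byte[] Data { get; } = new byte[4096];
}

public static class PoolingDemo
{
    static readonly ObjectPool<PacketBuffer> s_pool =
        new DefaultObjectPoolProvider().Create<PacketBuffer>();

    public static void Process()
    {
        var buffer = s_pool.Get();     // rent from our own pool
        try
        {
            // ... fill and use buffer.Data ...
        }
        finally
        {
            s_pool.Return(buffer);     // we must return it ourselves; the GC won't do this for us
        }
    }
}
```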

I'm not sure whether this change is good or not, so I tagged the title with "[Discussion]" and made the PR a draft, so we can see where it goes.

This reverts commit 703bfb64b24cf1c499e0b6150c84cfd239049723.

I don't know (yet) when it is safe to return the instance to the pool, so the commit is reverted.
@chkr1011 (Collaborator) commented

Interesting approach. But I am afraid that we might get cross-thread access issues. Let's say the result is also consumed on other threads (like UI threads): the values might get cleared (in the Dispose call) by the receiving thread. So the user must copy those values as soon as possible.
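
A hedged sketch of that hazard, assuming the disposable result proposed in this PR (namespaces as in the v4 branch); ShowOnUiThreadAsync is a hypothetical stand-in for marshalling to the UI thread:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MQTTnet;            // MqttApplicationMessage
using MQTTnet.Client;     // IMqttClient

static class PublishExample
{
    // Copy the values you need before the using scope ends, because Dispose
    // would return the pooled instance to the cache and clear it.
    public static async Task PublishAndReportAsync(
        IMqttClient mqttClient,
        MqttApplicationMessage publishMessage,
        CancellationToken stoppingToken)
    {
        using var result = await mqttClient.PublishAsync(publishMessage, stoppingToken);

        // Copy while the instance is still valid ...
        var reason = result.ReasonCode.ToString();

        // ... and pass only the copy across the thread boundary, never the pooled result itself.
        await ShowOnUiThreadAsync(reason);
    }

    // Hypothetical stand-in for real UI marshalling (e.g. a dispatcher invoke).
    static Task ShowOnUiThreadAsync(string text) => Task.CompletedTask;
}
```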

@gfoidl (Member, Author) commented Jan 19, 2022

For MqttClientPublishResult this shouldn't be a problem, as it depends on when the instance gets disposed.
But by now I don't like the approach taken for MqttClientPublishResult, as it's very unhandy.

The shown approach may be more interesting for "internal" types like MqttPublishPacket that aren't user-facing, so creation / rental and disposal / return to the pool can be controlled better.

@chkr1011 (Collaborator) commented

@gfoidl I reviewed some places in the v4 branch and managed to remove a lot of "useless" allocations. Not the MqttClientPublishResult, but a huge amount of others. Now the memory footprint of my benchmark has dropped from 1.6 GB to 400 MB.

Please let me know what you think. Do you have more recommendations?

@gfoidl (Member, Author) commented Feb 10, 2022

I can do some profiling on the v4 branch next week and let you know then.

@chkr1011 (Collaborator) commented

I will close this discussion for now because a lot of allocations of other classes were eliminated and overall memory consumption has been optimized. But we can still discuss this topic in the future.

@chkr1011 chkr1011 closed this Feb 14, 2022
@gfoidl gfoidl deleted the client_publish_caching branch February 17, 2022 12:02
@gfoidl (Member, Author) commented Feb 17, 2022

Looks much better now
(Screenshot: grafik)

> Please let me know what you think. Do you have more recommendations?

Need to dig through the code (changes) first. When I get some ideas, I'll open an issue / PR to discuss them.
