Describe the bug
We have a project that uses a high number of short-lived streams over an extended period of time. This causes high memory usage that forces us to restart the service once a day to keep memory in check. It seems to stem from the fact that we register only one EventStoreClient as a singleton in DI. The client does not appear to completely clean up everything after each stream has finished.
We have noticed that the memory growth comes from the underlying gRPC client, where a HashSet of ActiveCalls keeps growing. It is supposed to be cleaned up when the gRPC client is disposed, but as far as I can see the EventStore client never does that. For more information, see Config/Logs/Screenshots below. We have tried other lifetimes for the EventStoreClient, but these cause other problems.
We can see in the EventStore database logs that the subscriptions are being disposed on that end, so we know we are disconnecting from them.
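For reference, this is roughly how we register the client — a minimal sketch following the documented singleton pattern (the connection string is illustrative):

```csharp
using EventStore.Client;
using Microsoft.Extensions.DependencyInjection;

// Hedged sketch: one EventStoreClient registered as a singleton in DI,
// as recommended in the documentation. The connection string is a placeholder.
var services = new ServiceCollection();
var settings = EventStoreClientSettings.Create("esdb://localhost:2113?tls=false");
services.AddSingleton(new EventStoreClient(settings));
```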
To Reproduce
Steps to reproduce the behavior:
Register EventStoreClient as a singleton, as recommended in the documentation.
Subscribe to a very high number of streams over an extended time.
Cancel the CancellationToken passed to the stream subscription and let the subscription be garbage collected.
Watch the memory usage of the service grow.
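The steps above can be sketched as follows, assuming the v22 client API shape (stream name and handler are illustrative):

```csharp
using EventStore.Client;

// Hedged sketch of the repro pattern: subscribe with a CancellationToken,
// cancel it, and drop the subscription reference.
using var cts = new CancellationTokenSource();
var subscription = await client.SubscribeToStreamAsync(
    "some-stream",                      // illustrative stream name
    FromStream.Start,
    async (sub, resolvedEvent, ct) =>
    {
        // handle the event
    },
    cancellationToken: cts.Token);

// Later, cancel the subscription. The server logs it as disposed,
// but the client-side ActiveCalls entry appears to remain.
cts.Cancel();
```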
I have created a minimal example that reproduces the behavior. It is meant to run as two instances, one master and one slave, which create very many streams and send ping-pong events between the services.
The master creates a stream by publishing a ping event and then subscribes to that stream. The slave subscribes to the stream, waits for the ping event, and publishes a pong event. When the master receives the pong, it cancels the subscription and starts the process again.
This can be found here: Test project reproducing the problem. Look specifically at Orchestrator.cs for the behavior.
Expected behavior
The memory usage does not grow over time even when subscribing to many new streams.
Actual behavior
Memory usage grows over time as subscriptions are not cleaned up completely.
Config/Logs/Screenshots
Here is a picture from dotMemory where you can see high memory usage because a lot of old ActiveCalls are still in memory.
Plot of how memory usage increases with the number of streams. Collected from dotnet-counters.
dotnet-dump captures that showcase the high memory usage of the minimal example can be found here; the 97 000-stream dump in particular shows high memory usage.
EventStore details
EventStore server version: 21.10.0
Operating system: Windows 10 / Ubuntu 18.04
EventStore client version (if applicable): grpc client 22.0.0
Additional context
If there is any other way to use the client that does not lead to memory growth as in the example above, any pointers or other help would be highly appreciated.