[bug] - Memory leak #5922
For tracking: LGouellec/streamiz#361
This looks like a new Meter of the same name is created right after disposing it? Not sure what the intent is here.
It's just a reproducer; my initial use case is to expose OpenTelemetry metrics for a Kafka Streams application. I don't know if you are familiar with Kafka, but the assignment of partitions can change, which means you have to drop some metrics in some cases. Today, we can't drop a metric for a specific Meter instance. That's why, for now, I dispose the current Meter instance and recreate it from scratch on each iteration. With that, I don't need to drop metrics, I just don't expose the old metrics in the next iteration. My point here is that when the Meter is disposed, the old metrics are not released properly.
Is there a way in OpenTelemetry to clear and dispose all the resources without closing the MeterProvider?
Hey @LGouellec! I spent some time on this tonight. Here is what I think is going on...
The problem is in your repro: there you are using […]. Do me a favor and switch your code to […].

Should we do something here? I'm not sure what we could do. Open to ideas. But I will say, what you are proving by constantly disposing […]
Hello @cijothomas, I tried with the following:

```csharp
meterProviderBuilder.AddOtlpExporter(options =>
{
    options.Protocol = OtlpExportProtocol.Grpc;
    options.ExportProcessorType = ExportProcessorType.Batch;
});
```

Same results: the metrics are correctly exported to the OTel Collector, but the memory keeps growing locally due to unreleased MetricPoint instances.

One question remains: why don't you clear the metric points when you receive the notification that the Meter was disposed?
As clarified here, metric points are cleared only after the next export, not immediately upon Meter dispose, as we still want to export pending updates before throwing away the MetricPoints. Are you seeing that the OpenTelemetry SDK is not releasing MetricPoints even after export?
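(Editorial sketch, not something posted in the thread: since the points are only released after the next export, one way to trigger that export right after disposing a Meter is to flush the provider explicitly. This assumes the `MeterProvider.ForceFlush` extension method from the OpenTelemetry .NET SDK.)

```csharp
// Sketch only: force an export right after disposing a Meter so the reader can
// release the MetricPoints it was still holding for that Meter's instruments.
using System.Diagnostics.Metrics;
using OpenTelemetry;
using OpenTelemetry.Metrics;

using var meterProvider = Sdk.CreateMeterProviderBuilder()
    .AddMeter("Test")
    .AddOtlpExporter()
    .Build();

var meter = new Meter("Test");
var counter = meter.CreateCounter<long>("requests");
counter.Add(1);

meter.Dispose();                // instruments are marked as no longer published
meterProvider.ForceFlush(5000); // export pending updates; after this export the
                                // MetricPoints of the disposed Meter can be dropped
```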
Yes exactly, the memory keeps growing, and the number of MetricPoint[] instances keeps growing as well.
Could you confirm that the Meter is disposed as well? If the Meter is not disposed, then MetricPoints are not freed up by default.
Running this:

```csharp
public static void Main()
{
    Meter? meter = null;

    var meterProviderBuilder = Sdk
        .CreateMeterProviderBuilder()
        .AddMeter("Test")
        .SetResourceBuilder(
            ResourceBuilder.CreateDefault()
                .AddService(serviceName: "Test"));

    meterProviderBuilder.AddOtlpExporter((exporterOptions, readerOptions) =>
    {
        readerOptions.PeriodicExportingMetricReaderOptions.ExportIntervalMilliseconds = 5000;
    });

    using var meterProvider = meterProviderBuilder.Build();

    while (true)
    {
        meter?.Dispose();

        meter = new Meter("Test");
        var rd = new Random();
        meter.CreateObservableGauge(
            "requests",
            () => new[]
            {
                new Measurement<double>(rd.NextDouble() * rd.Next(100)),
            },
            description: "Request per second");

        // After a couple of minutes, the MetricReader contains a lot of
        // MetricPoint[], even though we dispose the Meter on each iteration.
        Thread.Sleep(200);
    }
}
```

Seems to be doing its job nicely. The red bars on "Object delta (%change)" are objects being collected.
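(Editorial illustration, not part of the thread: a quick way to tell whether those MetricPoint[] arrays are genuinely rooted or just waiting to be collected is to compare the managed heap size around forced, blocking collections.)

```csharp
// Sketch: if this delta keeps climbing even after forced full collections,
// the MetricPoint[] arrays are still referenced somewhere (a real leak);
// if it stays flat, the growth seen in the debugger is just uncollected garbage.
long before = GC.GetTotalMemory(forceFullCollection: true);

// ... run a few dispose/recreate iterations of the repro here ...

long after = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine($"Managed heap delta: {(after - before) / 1024.0 / 1024.0:F1} MB");
```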
@CodeBlanch Running the same snippet of code, the number of MetricPoint[] instances is huge, no? 54 MB just for MetricPoint[]. Running .NET 8.0. @cijothomas
@CodeBlanch Adding GC.Collect() helps to reduce the memory footprint. Example:

```csharp
public static void Main()
{
    Meter? meter = null;

    var meterProviderBuilder = Sdk
        .CreateMeterProviderBuilder()
        .AddMeter("Test")
        .SetResourceBuilder(
            ResourceBuilder.CreateDefault()
                .AddService(serviceName: "Test"));

    meterProviderBuilder.AddOtlpExporter((exporterOptions, readerOptions) =>
    {
        readerOptions.PeriodicExportingMetricReaderOptions.ExportIntervalMilliseconds = 5000;
    });

    using var meterProvider = meterProviderBuilder.Build();

    while (true)
    {
        meter?.Dispose();
        GC.Collect();

        meter = new Meter("Test");
        var rd = new Random();
        meter.CreateObservableGauge(
            "requests",
            () => new[]
            {
                new Measurement<double>(rd.NextDouble() * rd.Next(100)),
            },
            description: "Request per second");

        // After a couple of minutes, the MetricReader contains a lot of
        // MetricPoint[], even though we dispose the Meter on each iteration.
        Thread.Sleep(200);
    }
}
```

But I have a question: I have been running my application for more than 15 minutes, and at some point it seems that the metrics are no longer exported. Do you know what's going on?
Why does the "requests" metric stop being emitted while the .NET application is still running?
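(Editorial note, an assumption on my part rather than something raised in the thread: the OpenTelemetry .NET SDK caps the number of metric streams per provider, and if each recreated instrument ends up counting against that cap, new metrics are silently dropped once it is reached. Raising the limit is one way to test that hypothesis.)

```csharp
using OpenTelemetry;
using OpenTelemetry.Metrics;

// Assumption: metrics stopping after a while could be the metric-stream limit
// being hit as Meters are repeatedly disposed and recreated. Raising the limit
// here is only a diagnostic step, not a fix for the underlying memory growth.
using var meterProvider = Sdk.CreateMeterProviderBuilder()
    .AddMeter("Test")
    .SetMaxMetricStreams(10_000) // SDK default is 1000
    .AddOtlpExporter()
    .Build();
```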
Package
OpenTelemetry

Package Version

Runtime Version
net8.0

Description
I need to Dispose(..) my Meter instance often, because today Meter doesn't provide a way to delete an existing metric dynamically. See: dotnet/runtime#83822.

When I hit meter.Dispose(), the dotnet runtime notifies each instrument previously created that it is no longer published. See: https://github.com/dotnet/runtime/blob/9e59acb298c20658788567e0c6f0793fe97d37f6/src/libraries/System.Diagnostics.DiagnosticSource/src/System/Diagnostics/Metrics/Meter.cs#L519

The OpenTelemetry MetricReader (in my case BaseExportingMetricReader) must then deactivate the Metric. See: opentelemetry-dotnet/src/OpenTelemetry/Metrics/Reader/MetricReaderExt.cs, line 30 at commit 5dff99f.
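(Editorial sketch, not code from the issue: to make that notification path concrete, disposing a Meter raises MeasurementsCompleted on any MeterListener that enabled its instruments, and that callback is the hook the OpenTelemetry reader uses to deactivate the corresponding Metric.)

```csharp
using System;
using System.Diagnostics.Metrics;

// Minimal listener, standing in for the OpenTelemetry MetricReader.
var listener = new MeterListener();
listener.InstrumentPublished = (instrument, l) =>
{
    if (instrument.Meter.Name == "Test")
        l.EnableMeasurementEvents(instrument);
};
listener.MeasurementsCompleted = (instrument, state) =>
    Console.WriteLine($"Measurements completed for {instrument.Name}");
listener.Start();

var meter = new Meter("Test");
meter.CreateObservableGauge("requests", () => 1.0);

meter.Dispose(); // raises MeasurementsCompleted for "requests"
```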
But it seems there is a memory leak: when I run my program for a long time, the memory keeps growing. When I debug my app, I see a lot of unreleased MetricPoint[] instances. The longer my application runs, the more memory is consumed just by MetricPoint[] instances.

It seems that when meter.Dispose() is called, the OpenTelemetry metrics are not released properly.

Screenshot of my debugger at 10 minutes of uptime:

Screenshot of my debugger at 60 minutes of uptime (122 MB of MetricPoint[]):
Steps to Reproduce
Expected Result
All previously created MetricPoints released
Actual Result
Memory leak
Additional Context
No response