Cache seen resources for each export request #179
Conversation
stats.go
Outdated
@@ -106,6 +108,7 @@ func newStatsExporter(o Options) (*statsExporter, error) {
 	createdViews:           make(map[string]*metricpb.MetricDescriptor),
 	protoMetricDescriptors: make(map[string]*metricpb.MetricDescriptor),
 	metricDescriptors:      make(map[string]*metricpb.MetricDescriptor),
+	seenResources:          make(map[*resourcepb.Resource]*monitoredrespb.MonitoredResource),
To avoid extra memory usage, can we cache this for every export request instead? Every export request will presumably have more than 20 time series, so we don't get as much of a win as with this change, but at least we do not OOM.
Makes sense to me, updated in 8800548.
This may not be true for the agent/collector. It may see up to millions of resources.
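For context, here is a minimal sketch of what a per-request cache could look like. The `payload` type, `convertResource` helper, and `uploadOnce` function are illustrative stand-ins, not the exporter's actual API:

```go
package main

import (
	"fmt"

	resourcepb "github.com/census-instrumentation/opencensus-proto/gen-go/resource/v1"
	monitoredrespb "google.golang.org/genproto/googleapis/api/monitoredres"
)

// payload pairs a resource proto with the data to export; this type and the
// helpers below are hypothetical stand-ins for the exporter's internals.
type payload struct {
	resource *resourcepb.Resource
}

// convertResource is a hypothetical transformation from an OpenCensus
// resource proto to a Stackdriver monitored resource.
func convertResource(r *resourcepb.Resource) *monitoredrespb.MonitoredResource {
	return &monitoredrespb.MonitoredResource{Type: r.GetType(), Labels: r.GetLabels()}
}

// uploadOnce caches conversions only for the duration of a single export
// request: each unique resource proto is converted once, and the map becomes
// garbage as soon as the request finishes, so memory cannot grow without
// bound across the exporter's lifetime.
func uploadOnce(payloads []*payload) {
	seen := make(map[*resourcepb.Resource]*monitoredrespb.MonitoredResource)
	for _, p := range payloads {
		mr, ok := seen[p.resource]
		if !ok {
			mr = convertResource(p.resource)
			seen[p.resource] = mr
		}
		// mr would be attached to the outgoing time series here.
		_ = mr
	}
}

func main() {
	res := &resourcepb.Resource{Type: "host", Labels: map[string]string{"zone": "us-central1-a"}}
	uploadOnce([]*payload{{resource: res}, {resource: res}})
	fmt.Println("converted once per unique resource")
}
```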
@@ -299,7 +308,8 @@ func (se *statsExporter) uploadMetricsProto(payloads []*metricProtoPayload) error {

 	var allTimeSeries []*monitoringpb.TimeSeries
Can we call into ExportMetricsProtoSync from this point? Maybe in a different PR.
To avoid transforming the resource proto over and over again.
There should be a fairly small number of unique resources, so caching shouldn't be a big problem for memory usage.
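A memoized lookup along these lines (reusing the hypothetical `convertResource` stand-in from the sketch above) would keep the transformation to one call per unique resource:

```go
// lookupResource is a hypothetical helper: it converts a resource proto only
// on a cache miss, so repeated resources are transformed exactly once and the
// cache stays as small as the set of unique resources.
func lookupResource(cache map[*resourcepb.Resource]*monitoredrespb.MonitoredResource, r *resourcepb.Resource) *monitoredrespb.MonitoredResource {
	if mr, ok := cache[r]; ok {
		return mr
	}
	mr := convertResource(r)
	cache[r] = mr
	return mr
}
```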