The underlying time series backend comes under pressure due to the high cardinality of the tags (metadata that characterise the metrics) associated with check results. The current implementation accepts an unlimited number of tags and forwards all of them to the time series backend for storage.

Check results are delivered to the data-service per entity and contain the tags produced by the check definition. Some checks produce a large number of tags (key results), sometimes with a unique name on every execution, which causes an explosion in tag cardinality.

This issue is created to validate the hypothesis that tag cardinality (and consequently the pressure on the time series backend) can be reduced significantly by rate limiting the tags processed in the data-service layer, and to implement such a limit.
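As a rough illustration of the proposed approach (not the actual data-service code), the following sketch caps the number of tags accepted per check result before they are forwarded to the time series backend. The class name, the `limitTags` method, and the `MAX_TAGS_PER_RESULT` cap are hypothetical; in practice the limit would be configurable and the drop policy (truncate, aggregate, or reject) is still to be decided.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TagLimiter {

    // Hypothetical cap; the real value would come from data-service configuration.
    private static final int MAX_TAGS_PER_RESULT = 50;

    /**
     * Returns at most MAX_TAGS_PER_RESULT tags, preserving insertion order.
     * Tags beyond the cap are dropped so they never reach the time series
     * backend, keeping tag cardinality bounded per check result.
     */
    public static Map<String, String> limitTags(Map<String, String> tags) {
        if (tags.size() <= MAX_TAGS_PER_RESULT) {
            return tags;
        }
        Map<String, String> limited = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : tags.entrySet()) {
            if (limited.size() >= MAX_TAGS_PER_RESULT) {
                break;
            }
            limited.put(entry.getKey(), entry.getValue());
        }
        return limited;
    }
}
```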