Hello guys,
I was trying to implement the solution provided on your blog by @sorantis: https://www.elastic.co/blog/prometheus-monitoring-at-scale-with-the-elastic-stack
It works fine, but when you make Prometheus highly available by adding a second Prometheus instance, the same data ends up being stored twice in Elasticsearch.
It would be nice to add a feature to deduplicate and merge the data before storing it in Elasticsearch.
Thanos does the deduplication and merging, but it does not store data in Elasticsearch.
Thanks a lot!
Regards.
I finally had some time to explore Thanos, but I see that deduplication happens in the Query component, so it suffers from the same problem of storing duplicated data. There was a recent attempt at making the Compactor component do offline deduplication (which would speed up queries and save storage), but it seems to have stalled.
I'm afraid there isn't a simple workaround for this, as deduplication would have to happen in some component that ingests (and possibly caches) all the samples from all the HA nodes and decides what to drop. Happy to keep this open and discuss possible implementation strategies, though!
We talked about this internally and agreed that, given the complexity, we won't implement this feature in Beats anytime soon, but using a script processor might be a good way to deduplicate at ingest time.
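To make the script-processor suggestion concrete, here is a minimal sketch of what ingest-time deduplication could look like. Everything in it is illustrative rather than an official recipe: the pipeline name prometheus-dedup, the Elasticsearch endpoint, the prometheus.labels field path (assumed from the Metricbeat Prometheus module layout in the blog post), and the 60-second bucket (assumed to roughly match the scrape interval). The Painless script derives a deterministic _id from the sorted label set plus a time bucket, so a sample shipped by the second Prometheus replica overwrites the one from the first instead of creating a duplicate document.

import requests

# Painless script for an ingest pipeline: build a deterministic _id from the
# metric's label set (sorted so ordering is stable) plus the timestamp rounded
# down to a 60-second bucket. Documents with the same id overwrite each other,
# which collapses the samples coming from the two HA Prometheus replicas.
PAINLESS_SOURCE = """
def labels = ctx.prometheus?.labels;   // field path is an assumption (Metricbeat prometheus module)
if (labels != null) {
  def keys = new ArrayList(labels.keySet());
  Collections.sort(keys);
  String id = '';
  for (def k : keys) {
    id += k + '=' + labels[k] + ';';
  }
  // Round the timestamp down to a 60 s bucket so samples from both replicas
  // for the same period map to the same id (last write wins).
  long bucket = Instant.parse(ctx['@timestamp']).getEpochSecond() / 60L;
  ctx._id = id + bucket;
}
"""

pipeline = {
    "description": "Collapse duplicate Prometheus samples from HA replicas",
    "processors": [
        {
            "script": {
                "lang": "painless",
                "source": PAINLESS_SOURCE,
            }
        }
    ],
}

# Register the pipeline (name and endpoint are placeholders).
resp = requests.put(
    "http://localhost:9200/_ingest/pipeline/prometheus-dedup",
    json=pipeline,
)
resp.raise_for_status()
print(resp.json())

Metricbeat would then have to be pointed at the pipeline (for example via output.elasticsearch.pipeline). Note that timestamps from two independent replicas are never exactly equal, so this only collapses samples that land in the same bucket; the bucket size should match the effective scrape interval.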