Thanos Compactor contains a lot of duplicated blocks: they are different TSDB blocks with the same data but different ULIDs, and we only have 1 compactor doing the compaction work. One thing that might be related: we split the compactor into two StatefulSets, one whose relabel config drops blocks carrying the `replica` label and one whose relabel config selects only blocks containing the `replica` label (see the sketch below).
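For reference, a minimal sketch of what such a split might look like via the compactor's `--selector.relabel-config` flag; the `replica` label name and the regexes here are illustrative, not necessarily our exact config:

```yaml
# Compactor StatefulSet 1: only act on blocks that still carry a
# non-empty "replica" external label.
- action: keep
  source_labels: [replica]
  regex: .+
```

```yaml
# Compactor StatefulSet 2: skip blocks that carry the "replica" external
# label, i.e. only act on blocks where it has already been dropped.
- action: drop
  source_labels: [replica]
  regex: .+
```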
Thanos, Prometheus and Golang version used:
goversion: go1.21.7
version: 0.34.0
Object Storage Provider: S3
What happened: duplicated blocks
What you expected to happen: only 1 block per time range
How to reproduce it (as minimally and precisely as possible): not sure
Full logs to relevant components:
Anything else we need to know:
Actually, our issue is probably this one: #7488, so probably ignore the below.
We're having a similar problem. We're running several compactors, but they are separated correctly by either label or time. The issue is that a single compactor (the same pod, running for 14 days) has downsampled the same block twice, and this has happened for many blocks. The local meta-syncer has metadata for both blocks and reports no differences apart from the ULID.