compact: consider handling duplicate labels, or continuing on error #497
Comments
+1
So... would you rather not care about all of those issues? Even if, after 1000 compactions and downsampling operations, they grow into a serious problem that is unfixable and you would need to delete a couple of months of data? That can happen if you are feeding some malformed block into compaction logic that was never tested to compact a block with such a thing. I doubt this issue will cause something that serious, so what needs to be done is a short unit test checking whether we can compact those blocks and what they look like afterwards. And if all is good, then we can switch to a soft notification -> a metric and a log line, and continue the compaction work (:
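To make the suggested soft-failure path concrete, here is a minimal sketch of counting and logging malformed blocks and then continuing, instead of exiting with an error. All identifiers below (the metric name, `validateBlockLabels`, `compactBlock`, the block IDs) are illustrative assumptions, not Thanos' actual internals:

```go
// Sketch of a "soft failure" path: record a metric and a log line for a
// malformed block, then keep compacting the remaining blocks.
package main

import (
	"errors"
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

var malformedBlocks = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "compact_malformed_blocks_total",
	Help: "Blocks skipped because their series had malformed (e.g. duplicate) labels.",
})

// validateBlockLabels stands in for a real duplicate-label check on a block.
func validateBlockLabels(id string) error {
	if id == "bad-block" {
		return errors.New(`duplicate label "ns" in series`)
	}
	return nil
}

// compactBlock stands in for the real compaction step.
func compactBlock(id string) error { return nil }

func main() {
	prometheus.MustRegister(malformedBlocks)
	for _, id := range []string{"good-block", "bad-block"} {
		if err := validateBlockLabels(id); err != nil {
			malformedBlocks.Inc()
			log.Printf("skipping block %s: %v", id, err)
			continue // soft notification: metric + log line, keep going
		}
		if err := compactBlock(id); err != nil {
			log.Printf("compaction of %s failed: %v", id, err)
		}
	}
}
```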
See https://improbable-eng.slack.com/archives/CA4UWKEEN/p1539959413000100 You need to apply relabelling to get rid of
I have a similar issue with time series produced by
This leads to a timeseries with the following labels:
which produces this error:
Sorry, I'm probably asking in the wrong place. Are duplicate labels allowed in the Prometheus format? (And is this therefore a bug in kube-state-metrics or Thanos?) EDIT: When I query Prometheus for this time series, Prometheus shows only one of the two duplicate labels.
This would be fixed by #848, right?
Hi @sbueringer, I believe so, yes. It should fix the issue with the same label names. The only issue that persists is the first-letter-uppercase label names, which should be possible to overcome once #953 is merged, but that's a different case. Could you please try it out with current master to see if the issue is resolved? Thanks!
Sorry for the late answer. I verified it with the current Thanos version and the issue is fixed. So in my opinion you can close the issue.
Great to hear, thanks for verifying!
Thanos, Prometheus and Golang version used
thanos, version 0.1.0-rc.2 (branch: HEAD, revision: 53e4d69)
build user: root@c7199d758b5e
build date: 20180705-12:54:50
go version: go1.10.3
prometheus, version 2.3.2 (branch: HEAD, revision: 71af5e29e815795e9dd14742ee7725682fa14b7b)
build user: root@5258e0bd9cc1
build date: 20180712-14:02:52
go version: go1.10.3
What happened
Thanos compact logs the following error and then exits with status 1
Note the duplicate ns="config.actionlog".
What you expected to happen
Compactor to de-dup the labels (?), or simply log the error and move on to other work
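For illustration, a minimal sketch of what de-duplicating a label set could look like, assuming labels are already sorted by name and represented as plain name/value pairs (this is not Thanos' actual code path, just the idea):

```go
package main

import "fmt"

// Label is a simplified name/value pair standing in for a real TSDB label.
type Label struct {
	Name, Value string
}

// dedupLabels removes adjacent entries with identical name and value from a
// label set that is already sorted by name, keeping the first occurrence.
func dedupLabels(lset []Label) []Label {
	out := lset[:0:0]
	for _, l := range lset {
		if len(out) > 0 && out[len(out)-1] == l {
			continue // drop exact duplicate such as ns="config.actionlog"
		}
		out = append(out, l)
	}
	return out
}

func main() {
	lset := []Label{
		{"__name__", "bad_metric"},
		{"ns", "config.actionlog"},
		{"ns", "config.actionlog"},
	}
	fmt.Println(dedupLabels(lset))
}
```

Note that this only handles exact duplicates; two labels with the same name but different values would still have to be treated as an error (or cleaned up via relabelling).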
How to reproduce it (as minimally and precisely as possible):
I'm still trying to figure out how this happened. Maybe a bad scrape target. It seems to have stopped happening in my environment. But if I intentionally create a bad scrape target with something like:
Prometheus will show two different time series with identical labels when I query for bad_metric.
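For reference, a minimal sketch of a deliberately bad scrape target (a hypothetical reproduction, not the original one from the report) that hand-writes the Prometheus text exposition format and repeats the same label twice on one series:

```go
// A deliberately broken scrape target exposing a series with a duplicated
// "ns" label. The metric name, label value, and port are illustrative.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, `# TYPE bad_metric gauge`)
		// Duplicate label "ns" on the same series.
		fmt.Fprintln(w, `bad_metric{ns="config.actionlog",ns="config.actionlog"} 1`)
	})
	log.Fatal(http.ListenAndServe(":9999", nil))
}
```

Pointing a Prometheus scrape job at :9999 feeds the malformed series into the TSDB; whether the scrape is accepted verbatim may depend on the Prometheus version used.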