store:/api/v1/label/<name>/values matchers not work #6878
Comments
@R4scal can you share your whole datasource configuration? Also, please try setting the type of the "Prometheus" behind it to Thanos.
The problem is not in Grafana; direct queries to Prometheus work fine.
How can I know what it is looking for? So, please, include more information: tell us about the data that you have, what you expect, and what you get.
Note that the different APIs are doing VERY different things.
Do you see the difference?
The main problem is that for this query I should only get the matching label values.
I can reproduce this for external labels and have prepared a fix. I suppose up is not an external label though; for internal labels my quick test setup worked properly. Can you explain your topology a bit, please, so we have a better idea where to look? Additionally, are you able to share a block? That would greatly help!
Here is the fix for external labels: https://github.com/thanos-io/thanos/pull/6879/files, but it is probably not related to this issue.
At least for sidecar and internal labels, my query setup produces the expected results:
but it might be a very specific setup (from debugging another issue, I just still had it around).
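A minimal sketch of that kind of check, assuming a single sidecar behind a querier; the port, label, and selector here are placeholders, not the exact setup referenced above:

```sh
# With only a sidecar behind the querier, the matcher is honored and the
# response contains just the label values from series matching the selector.
curl -sG 'http://localhost:10904/api/v1/label/instance/values' \
  --data-urlencode 'match[]=up{instance="localhost:9090"}'
```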
I have a complex topology:
My issue is not about external labels.
Can you test the curl for the other queries too? Is it maybe related to the query-frontend?
Tested. All queries are affected.
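One hedged way to separate the query-frontend from the querier is to send the same label values request to each component directly; the hostnames and selector below are placeholders:

```sh
# If both the query-frontend and a querier behind it return the unfiltered
# value list, the frontend is unlikely to be the culprit.
curl -sG 'http://query-frontend:9090/api/v1/label/job/values' \
  --data-urlencode 'match[]=up{job="prometheus"}'
curl -sG 'http://querier:9090/api/v1/label/job/values' \
  --data-urlencode 'match[]=up{job="prometheus"}'
```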
OK, thank you for verifying. Since it works for me with the sidecar, and query just joins all returned labels, it must be related to the bucket store, I think. (Which is weird; we have acceptance tests for this in principle.)
I tested without store (only sidecar) and it works fine. |
Looks like it is a store issue :(
Good stuff, thank you!
Can you share information about the store instance, which version, which config, etc.?
I found an old v0.19.0 store that produces the labels. Trying to fix this.
I can confirm the issue with v0.19.0. With v0.32.5 everything works.
Thanks for confirming! I'll close the issue then!
Thanks for the help!
Can we please reopen the issue? It is still present for me in v0.35.5, and I noticed it due to a Grafana update (grafana/grafana#78043). I expect to get only the label value "fsn1", but instead I get all regions:
{
"status":"success",
"data":["ash","fsn1","hel1","hil","nbg1"]
}

Query range with the same query and time range (I reduced the JSON output to keep it simple; there are some more values returned and some more nodes that all have similar labels, like in the example below; only instance and exported_instance change):
{
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"__name__": "opensearch_cluster_status",
"cluster": "fsn1-dc14-opensearch1",
"exported_instance": "172.27.9.1:9200",
"exported_job": "opensearch",
"instance": "fsn1-dc14-opensearch1-x1.<someurl>",
"job": "internal",
"monitor": "opensearch",
"region": "fsn1"
},
"values": [
[
1701065520,
"0"
]
]
},
{
"metric": {
"__name__": "opensearch_cluster_status",
"cluster": "fsn1-dc14-opensearch1",
"exported_instance": "172.27.9.2:9200",
"exported_job": "opensearch",
"instance": "fsn1-dc14-opensearch1-x1.<someurl>",
"job": "internal",
"monitor": "opensearch",
"region": "fsn1"
},
"values": [
[
1701065520,
"0"
]
]
},
{
"metric": {
"__name__": "opensearch_cluster_status",
"cluster": "fsn1-dc14-opensearch1",
"exported_instance": "172.27.9.3:9200",
"exported_job": "opensearch",
"instance": "fsn1-dc14-opensearch1-x2.<someurl>",
"job": "internal",
"monitor": "opensearch",
"region": "fsn1"
},
"values": [
[
1701065520,
"0"
]
]
},
{
"metric": {
"__name__": "opensearch_cluster_status",
"cluster": "fsn1-dc14-opensearch1",
"exported_instance": "172.27.9.4:9200",
"exported_job": "opensearch",
"instance": "fsn1-dc14-opensearch1-x2.<someurl>",
"job": "internal",
"monitor": "opensearch",
"region": "fsn1"
},
"values": [
[
1701065520,
"0"
]
]
}
]
}
}
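For reference, a hedged reconstruction of the two requests behind the outputs above; the host is a placeholder, while the metric, matcher, and timestamp come from the JSON:

```sh
# Label values with a matcher: only "fsn1" is expected, yet all regions come back.
curl -sG 'https://thanos.example.com/api/v1/label/region/values' \
  --data-urlencode 'match[]=opensearch_cluster_status{cluster="fsn1-dc14-opensearch1"}'

# The same selector via query_range returns only series with region="fsn1",
# which is what the label values call should have been limited to.
curl -sG 'https://thanos.example.com/api/v1/query_range' \
  --data-urlencode 'query=opensearch_cluster_status{cluster="fsn1-dc14-opensearch1"}' \
  --data-urlencode 'start=1701065520' \
  --data-urlencode 'end=1701065520' \
  --data-urlencode 'step=60'
```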
@Jakob3xD can you describe your setup please?
@MichaHoffmann The following is a graphical version of our setup.
I think I fixed the issue for external labels here: https://github.com/thanos-io/thanos/pull/6879/files. Your rulers are not upgraded, and the issue was in the sidecar and TSDB store (rulers use the TSDB store, I think); can you upgrade the rulers and retry please?
Oh wait, that was never merged; let me ping on the PR.
Closing as a duplicate of #6959, which has now been closed. It was fixed by https://github.com/thanos-io/thanos/pull/6879/files.
Hello. I encountered the problem described in #5469.
Today we upgraded Grafana, and label_values() queries moved from /api/v1/series to /api/v1/label/<label>/values. Now we get all label values for all queries; the matchers do not work. For example, compare the new API and the old API requests (a sketch of both is shown below).
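A hedged sketch of the two request shapes (the original commands were not preserved in this copy; the host, metric, and label names are placeholders):

```sh
# New API: Grafana's label_values(up{job="prometheus"}, job) now calls
# /api/v1/label/<label>/values and relies on match[] to filter the values.
curl -sG 'http://thanos-query:9090/api/v1/label/job/values' \
  --data-urlencode 'match[]=up{job="prometheus"}'

# Old API: the previous behaviour went through /api/v1/series and extracted
# the label from the returned series label sets.
curl -sG 'http://thanos-query:9090/api/v1/series' \
  --data-urlencode 'match[]=up{job="prometheus"}'
```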
Thanos: v0.32.5
Prometheus: v2.45.1