After removing the relation with scrape-interval-config, old alert rules are still in Prometheus #550
Comments
This might be related to canonical/grafana-agent-operator#29
I see a similar issue with prometheus-scrape-config-k8s: removing the relation between it and prometheus does not update the alert rules. The _configure method, which checks and runs _set_rules, is never run. This could be a bug specific to the receive-remote-write interface.
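For context, the kind of wiring suspected to be missing would look roughly like the sketch below in an ops-based charm. Only _configure and _set_rules are named in this comment; the class name, relation name, and observer registration are assumptions for illustration, not the real charm code.

```python
import ops


class ScrapeConfigCharm(ops.CharmBase):
    """Hypothetical sketch of the suspected missing wiring, not the real charm."""

    def __init__(self, *args):
        super().__init__(*args)
        # Suspicion: rule regeneration is only observed on relation-changed/joined,
        # so a broken relation never triggers _configure and stale rules remain.
        self.framework.observe(
            self.on.configurable_scrape_jobs_relation_broken, self._configure
        )

    def _configure(self, event: ops.EventBase) -> None:
        # Recompute scrape jobs and alert rules from the relations that still
        # exist, then push the result downstream.
        self._set_rules()

    def _set_rules(self) -> None:
        # Placeholder for forwarding the regenerated rules over the downstream
        # relation; omitted in this sketch.
        ...
```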
This will probably be fixed by canonical/operator#1091; we should check whether we can still reproduce it once that is merged.
Hi @przemeklal, it seems that canonical/operator#1091 fixed our issue here. Please let me know if the steps I followed to reproduce it are OK. I deployed the two bundles below, one in a k8s model and one in an lxd model, and confirmed that the alert rules were present in Prometheus. After that I removed the relation between cos-proxy and scrape-interval-config:

╭─ubuntu@charm-dev-juju-31 ~/repos [lxd:apps]
╰─$ juju remove-relation cos-proxy scrape-interval-config

and verified that the alert rules are no longer in Prometheus.
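One way to script this check is against the Prometheus HTTP API; a sketch, assuming the prom/0 unit name and default port 9090 from the bundle below (the jq path for the unit address is an assumption and may vary by juju version):

```shell
# List the alert rule group names Prometheus currently serves; after the
# relation is removed, no group originating from cos-proxy should remain.
PROM_ADDR=$(juju status prom --format=json | jq -r '.applications.prom.units["prom/0"].address')
curl -s "http://${PROM_ADDR}:9090/api/v1/rules" | jq -r '.data.groups[].name'
```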
Bundles used

Kubernetes model:

bundle: kubernetes
saas:
  remote-cda7ba05eeef4f09890fa77c7fe82347: {}
applications:
  prom:
    charm: prometheus-k8s
    channel: edge
    revision: 170
    resources:
      prometheus-image: 139
    scale: 1
    constraints: arch=amd64
    storage:
      database: kubernetes,1,1024M
    trust: true
  scrape-interval-config:
    charm: prometheus-scrape-config-k8s
    channel: edge
    revision: 47
    scale: 1
    constraints: arch=amd64
relations:
- - prom:metrics-endpoint
  - scrape-interval-config:metrics-endpoint
- - scrape-interval-config:configurable-scrape-jobs
  - remote-cda7ba05eeef4f09890fa77c7fe82347:downstream-prometheus-scrape
--- # overlay.yaml
applications:
  scrape-interval-config:
    offers:
      scrape-interval-config:
        endpoints:
        - configurable-scrape-jobs
        acl:
          admin: admin

LXD model:

series: jammy
saas:
  scrape-interval-config:
    url: microk8s:admin/cos.scrape-interval-config
applications:
  cos-proxy:
    charm: cos-proxy
    channel: edge
    revision: 64
    num_units: 1
    to:
    - "1"
    constraints: arch=amd64
  nrpe:
    charm: nrpe
    channel: edge
    revision: 117
  ubuntu:
    charm: ubuntu
    channel: stable
    revision: 24
    series: focal
    num_units: 1
    to:
    - "0"
    constraints: arch=amd64
    storage:
      block: loop,100M
      files: rootfs,100M
machines:
  "0":
    constraints: arch=amd64
    series: focal
  "1":
    constraints: arch=amd64
relations:
- - ubuntu:juju-info
  - nrpe:general-info
- - nrpe:monitors
  - cos-proxy:monitors
- - cos-proxy:downstream-prometheus-scrape
  - scrape-interval-config:configurable-scrape-jobs
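For completeness, the bundles above can be deployed in the usual way, roughly as follows (the file names are placeholders of mine, not from the original report):

```shell
# In the Kubernetes (microk8s) model:
juju deploy ./cos-bundle.yaml --overlay ./overlay.yaml --trust

# In the LXD model, which consumes the scrape-interval-config offer via its saas section:
juju deploy ./lxd-bundle.yaml
```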
@przemeklal Since we are no longer able to reproduce this, we are closing the issue. Let us know if the bug reappears.
Bug Description
After removing the relation between scrape-interval-config and Prometheus, I can still see the old alert rules in Prometheus, specifically the alert rules coming from cos-proxy.
To Reproduce
Environment
latest/edge versions of the COS components as of now; nrpe from latest/stable
Relevant log output
Nothing relevant in the logs.
Additional context
No response