
[occm] Support Octavia/Amphora Prometheus endpoint creation using annotations #2465

Open
antonin-a opened this issue Nov 3, 2023 · 12 comments · May be fixed by #2633


antonin-a commented Nov 3, 2023

Component: openstack-cloud-controller-manager (occm)

FEATURE REQUEST?:

/kind feature

As a Kubernetes + occm user, I would like to be able to create a Prometheus endpoint (a listener with the special protocol "PROMETHEUS") so that I can easily monitor my Octavia load balancers using Prometheus.

What happened:
Currently the only way to do so is to use the OpenStack CLI/APIs:
openstack loadbalancer listener create --name stats-listener --protocol PROMETHEUS --protocol-port 9100 --allowed-cidr 10.0.0.0/8 $os_octavia_id

What you expected to happen:
Create the Prometheus endpoint using annotations at load balancer creation time (Kubernetes Service of type LoadBalancer).

Annotations we suggest adding:

apiVersion: v1
kind: Service
metadata:
  name: octavia-metrics
  annotations:
    loadbalancer.openstack.org/metrics-enable: "true"
    loadbalancer.openstack.org/metrics-port: "9100"
    loadbalancer.openstack.org/metrics-allow-cidrs: "10.0.0.0/8, fe80::/10"
    loadbalancer.openstack.org/vip-address: "10.4.2.3" # Auto-computed field based on the Octavia VIP, required for Prometheus configuration or any other monitoring solution (currently it is not possible to retrieve the private IP of a public LB)
  labels:
    app: test-octavia
spec:
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer 
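
For illustration, a sketch of a Prometheus scrape configuration that would consume the auto-computed vip-address annotation (the job name is a hypothetical placeholder; the target combines the annotated VIP with the metrics-port value):

scrape_configs:
  # Hypothetical scrape job: the target is built from the
  # loadbalancer.openstack.org/vip-address annotation plus
  # the loadbalancer.openstack.org/metrics-port annotation
  - job_name: octavia-amphora-metrics
    static_configs:
      - targets: ["10.4.2.3:9100"]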

Anything else we need to know?:
Related Octavia documentation:
https://docs.openstack.org/octavia/latest/user/guides/monitoring.html#monitoring-with-prometheus

As an OpenStack public cloud provider, we are currently working on a custom CCM implementation; for this reason we can potentially submit the PR associated with this request, but we would like to validate the implementation before starting development.


dulek commented Nov 3, 2023

I see this as a valid feature request. I think I'd rather skip the metrics-enable annotation and assume that if metrics-port is set, we should enable the metrics listener. What I don't like here is exposing the VIP address to the end user. I guess using a FIP to reach the metrics doesn't work due to security concerns?
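
A sketch of that simplification, assuming the annotation names proposed above (values are illustrative):

metadata:
  annotations:
    # No metrics-enable flag: setting metrics-port alone would
    # turn on the PROMETHEUS listener
    loadbalancer.openstack.org/metrics-port: "9100"
    loadbalancer.openstack.org/metrics-allow-cidrs: "10.0.0.0/8, fe80::/10"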


Lucasgranet commented Nov 3, 2023

Hello @dulek,

Most of the time, your Prometheus scraper will be deployed inside your K8s cluster. If you're scraping from a node of the cluster, your request will go through the router to reach the FIP. In that case, you will need to add the router's egress IP (OpenStack-managed or not) to the Prometheus listener's allowed-CIDR list to allow the client.

IMO, using the VIP is better for an integration in a K8s cluster.
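
As an illustrative sketch (the router egress IP below is a made-up placeholder), scraping through the FIP would force the router's egress IP into the allowed CIDRs, coupling the Service to network details outside the cluster:

    # Hypothetical: scraping via the FIP would additionally require
    # allowing the router's egress IP (placeholder value)
    loadbalancer.openstack.org/metrics-allow-cidrs: "10.0.0.0/8, 203.0.113.45/32"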

Lucas,


dulek commented Nov 6, 2023

@Lucasgranet: Fair enough, I guess this is the only way forward then.

@jichenjc, do you think exposing LB VIP IP on the Service might potentially be dangerous?

@antonin-a (Author)

Hello @dulek, any update on this one?


dulek commented Nov 28, 2023

> Hello @dulek, any update on this one?

I've asked @jichenjc for an opinion in my previous comment. @zetaab might have something to say too.

All that being said, I don't have free cycles to work on this, as it's not a use case for us. We'll definitely welcome a contribution from your side.

@jichenjc (Contributor)

> do you think exposing LB VIP IP on the Service might potentially be dangerous?
>
> loadbalancer.openstack.org/vip-address: "10.4.2.3" # Auto-computed field based on Octavia VIP as it is required for Prometheus configuration or any other solution (currently it is not possible to retrieve private IP of public LBs)

Sorry, I saw this just now. I am not a security expert, but it seems harmless, since we have to provide the LB info for some connections anyway. However, a normal app user (the one who creates the Service) would need to understand the details of the LB underneath, which I don't think I've seen in other Service creation templates before.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 27, 2024

kbudde commented Feb 27, 2024

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Feb 27, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 27, 2024
@antonin-a (Author)

/remove-lifecycle stale

We will work on it

@k8s-ci-robot removed the lifecycle/stale label May 27, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 24, 2024