
Add InhibitionClusterWithoutWorkerNodes for CAPA #1397

Merged: 2 commits into main from capa-inhibit-noworker-clusters, Oct 22, 2024

Conversation

hervenicol (Contributor):

Towards: https://github.com/giantswarm/giantswarm/issues/31390

This PR adds an inhibition for clusters that have no worker nodes.


@hervenicol hervenicol self-assigned this Oct 21, 2024
@hervenicol hervenicol requested review from a team as code owners October 21, 2024 16:24
@hervenicol hervenicol force-pushed the capa-inhibit-noworker-clusters branch from 08fe221 to 3a24c8c on October 21, 2024 16:41
annotations:
  description: '{{`Cluster ({{ $labels.cluster_id }}) has no worker nodes.`}}'
expr: |-
  label_replace(
Contributor:
Why is this for CAPA only? Otherwise it looks good.

Contributor:

I would think that with capi_machinepool_spec_replicas and capi_machinedeployment_spec_replicas we should be good?


Contributor:

Makes sense :)

    "(.*)"
  ) == 1
  unless on (cluster_id) (
    sum(capi_machinepool_spec_replicas{} > 0) by (cluster_id)
Contributor:

Do we accept only 1 worker node?

hervenicol (author):

Prometheus-agent/Alloy could run on a 1-node WC, so that's potentially ok.
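
For context, piecing the two diff fragments quoted above together, the full rule presumably resembles the sketch below. Only the quoted fragment lines are confirmed by the diff; the source metric, the `label_replace` arguments, the `machinedeployment` branch, and the inhibition label are assumptions inferred from the surrounding discussion.

```yaml
# Hypothetical reconstruction -- only the lines that appear in the diff
# fragments are confirmed; everything marked "assumed" is a guess.
- alert: InhibitionClusterWithoutWorkerNodes
  annotations:
    description: '{{`Cluster ({{ $labels.cluster_id }}) has no worker nodes.`}}'
  expr: |-
    label_replace(
      capi_cluster_status_phase{phase="Provisioned"},  # assumed source metric
      "cluster_id", "$1", "name",                      # assumed label_replace args
      "(.*)"
    ) == 1
    unless on (cluster_id) (
      sum(capi_machinepool_spec_replicas{} > 0) by (cluster_id)
      or
      sum(capi_machinedeployment_spec_replicas{} > 0) by (cluster_id)  # assumed, per the review discussion
    )
  labels:
    cluster_without_worker_nodes: "true"  # assumed inhibition label name
```

The `unless on (cluster_id)` clause is what makes this an inhibition source: the alert fires only for clusters that have no MachinePool (or, assumed, MachineDeployment) with a nonzero desired replica count.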

Member:

Are we sure we want to use capi_machinepool_spec_replicas? That's the number of replicas defined in the MachinePool spec, but that doesn't necessarily represent the current number of nodes. For the inhibition, I thought we would prefer to use the current number of replicas. There are other metrics like

  • capi_machinepool_status_replicas: Replicas is the most recently observed number of replicas.
  • capi_machinepool_status_replicas_ready: The number of ready replicas for this MachinePool. A machine is considered ready when the node has been created and is "Ready".
  • capi_machinepool_status_replicas_available: The number of available replicas (ready for at least minReadySeconds) for this MachinePool.

Do you think it would make sense to use one of those instead?
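
As a sketch, that alternative would swap the desired-replica metrics for observed-ready ones in the `unless` clause. This assumes the status metrics carry a `cluster_id` label the same way the spec metrics do, and the `machinedeployment` counterpart is a guess:

```yaml
# Hypothetical alternative -- not what the PR merged.
unless on (cluster_id) (
  sum(capi_machinepool_status_replicas_ready{} > 0) by (cluster_id)
  or
  sum(capi_machinedeployment_status_replicas_ready{} > 0) by (cluster_id)  # assumed metric name
)
```

The trade-off discussed below: spec replicas inhibit only deliberately scaled-down clusters, while ready replicas would also inhibit clusters whose nodes exist but are unhealthy.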

hervenicol (author):

The current inhibition works when the cluster has been purposely scaled down.

If the cluster should have nodes but none is ready/available, I think we should get a page.
I don't think the current state of CAPI monitoring handles that case, so I'd rather have a "prometheus-agent down" alert than no alert at all.

It seems to me that there's quite a gap between vintage AWS and CAPA alerts, but I don't think it's Atlas's responsibility to fix it. So I went the quickest way to solving my actual issue 😅

Member:

OK, I thought you wanted the inhibition to avoid paging when the cluster had other issues (i.e. no ready nodes), meaning there was nothing wrong with your component.

hervenicol (author):

That would be ideal. But my first expectation is to not get paged when a cluster has no issues, for now 🤣

@hervenicol hervenicol merged commit c3d9f2a into main Oct 22, 2024
7 checks passed
@hervenicol hervenicol deleted the capa-inhibit-noworker-clusters branch October 22, 2024 07:55