Add az-id label to nodes #3878
Comments
We have an AZ label on the node already, but as I understand it you want it in the form of the AZ ID.
Yup, there are a lot of use cases where you might need to get the real physical location for pricing/network/etc.
Related: kubernetes/cloud-provider-aws#300
Would be great if we could just rely on the linked issue above to do this for us. I'm sure we could do this in Karpenter, but if it's going to be implemented upstream anyway, the label here would just end up redundant.
Outside Kubernetes itself, it'd be handy to have AWS define what the label key is going to be. We can get that defined and agreed even before the code to set it is written.
Does anybody know an "easy" workaround for this? E.g. getting the AZ ID in UserData and passing it to kubelet somehow?
I think you'd have to do it in a post-node-join controller or DaemonSet. Karpenter doesn't expose the bootstrap.sh config, which is why you can't set the labels during that step; as far as I know, that was the long-standing way to set runtime labels on startup.
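For anyone who wants a concrete starting point, here's a minimal sketch of that kind of post-join labeler. It assumes the pieces not spelled out in this thread: it runs on each node (e.g. as a DaemonSet), NODE_NAME is injected via the downward API, the pod can reach IMDSv2, RBAC allows patching nodes, and the label key (`example.com/az-id`) is only a placeholder since no official key has been defined yet.

```python
# Sketch of a post-join node labeler (run as a DaemonSet on each node).
# Assumptions: IMDSv2 reachable from the pod, NODE_NAME injected via the
# downward API, and RBAC that allows patching nodes. The label key below is
# a placeholder -- there is no officially defined az-id label key yet.
import os

import requests
from kubernetes import client, config

IMDS = "http://169.254.169.254"
LABEL_KEY = "example.com/az-id"  # placeholder key, not an official label


def get_az_id() -> str:
    # IMDSv2: fetch a session token, then read the AZ ID from instance metadata.
    token = requests.put(
        f"{IMDS}/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
        timeout=2,
    ).text
    return requests.get(
        f"{IMDS}/latest/meta-data/placement/availability-zone-id",
        headers={"X-aws-ec2-metadata-token": token},
        timeout=2,
    ).text


def main() -> None:
    config.load_incluster_config()
    node_name = os.environ["NODE_NAME"]  # injected via the downward API
    patch = {"metadata": {"labels": {LABEL_KEY: get_az_id()}}}
    client.CoreV1Api().patch_node(node_name, patch)


if __name__ == "__main__":
    main()
```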
Tell us about your request
Can Karpenter add an az-id label to nodes?
This currently cannot be done, since users would need to set the label via a startup script and/or modify the bootstrap flag for node labels.
I am not sure whether the inability to modify bootstrap command flags is changing with the new machine ownership changes in 0.28.
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Our cost team needs the az-id label for pricing. I could see a lot of other uses too.
Are you currently working around this issue?
Not with Karpenter itself. An alternative is to run some kind of DaemonSet or controller that adds the label to nodes after they join. That approach runs into limitations with Prometheus metrics: modifying the labels on a series after it has been created can cause issues with multiple time series.
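For the cost-reporting case specifically, here is a sketch of a lighter alternative that avoids relabeling nodes at all: resolve the AZ ID out of band from the standard topology.kubernetes.io/zone label. This assumes boto3 credentials for the account and region the nodes run in, and a kubeconfig that can list nodes.

```python
# Sketch: map each node's zone name to its AZ ID without relabeling nodes.
# Assumes AWS credentials for the account/region the nodes run in and a
# kubeconfig (or in-cluster config) with permission to list nodes.
import boto3
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
ec2 = boto3.client("ec2")

# AZ names (e.g. us-east-1a) map to account-specific AZ IDs (e.g. use1-az1).
zones = ec2.describe_availability_zones()["AvailabilityZones"]
name_to_id = {z["ZoneName"]: z["ZoneId"] for z in zones}

for node in client.CoreV1Api().list_node().items:
    zone = (node.metadata.labels or {}).get("topology.kubernetes.io/zone")
    print(node.metadata.name, zone, name_to_id.get(zone))
```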
Additional Context
No response
Attachments
No response