
Add support for adding magnum.openstack.org/role role to nodes. #164

Closed
mnaser opened this issue Jul 14, 2023 · 7 comments · Fixed by #167


mnaser commented Jul 14, 2023

More information can be found here: https://docs.openstack.org/magnum/latest/user/index.html#roles

We should figure out the best way to attach this, if it's not available out of the box it might be good to know/see what labels that Cluster API adds by default.


okozachenko1203 commented Jul 14, 2023

There is neither a special label nor annotation.
master node:

  annotations:
    cluster.x-k8s.io/cluster-name: kube-carvi
    cluster.x-k8s.io/cluster-namespace: magnum-system
    cluster.x-k8s.io/machine: kube-carvi-9tjnd-vvtlc
    cluster.x-k8s.io/owner-kind: KubeadmControlPlane
    cluster.x-k8s.io/owner-name: kube-carvi-9tjnd
    csi.volume.kubernetes.io/nodeid: '{"cinder.csi.openstack.org":"1d08fcd0-a2bb-4b62-b237-c5409e8ed691","manila.csi.openstack.org":"kube-carvi-control-plane-dxsqm-kvzvm","nfs.csi.k8s.io":"kube-carvi-control-plane-dxsqm-kv>
    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
    node.alpha.kubernetes.io/ttl: "0"
    projectcalico.org/IPv4Address: 10.0.0.127/24
    projectcalico.org/IPv4IPIPTunnelAddr: 10.100.93.192
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: m1.medium
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: RegionOne
    failure-domain.beta.kubernetes.io/zone: nova
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kube-carvi-control-plane-dxsqm-kvzvm
    kubernetes.io/os: linux
    node-role.kubernetes.io/control-plane: ""
    node.kubernetes.io/exclude-from-external-load-balancers: ""
    node.kubernetes.io/instance-type: m1.medium
    topology.cinder.csi.openstack.org/zone: nova
    topology.kubernetes.io/region: RegionOne
    topology.kubernetes.io/zone: nova

worker node (node group without any specific role):

  annotations:
    cluster.x-k8s.io/cluster-name: kube-carvi
    cluster.x-k8s.io/cluster-namespace: magnum-system
    cluster.x-k8s.io/machine: kube-carvi-default-worker-wrjbf-5b8588cb4f-tw765
    cluster.x-k8s.io/owner-kind: MachineSet
    cluster.x-k8s.io/owner-name: kube-carvi-default-worker-wrjbf-5b8588cb4f
    csi.volume.kubernetes.io/nodeid: '{"cinder.csi.openstack.org":"3c648120-e4c0-4fc0-b974-c4658e437b01","manila.csi.openstack.org":"kube-carvi-default-worker-infra-t2f8r-kjl88","nfs.csi.k8s.io":"kube-carvi-default-worker->
    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
    node.alpha.kubernetes.io/ttl: "0"
    projectcalico.org/IPv4Address: 10.0.0.182/24
    projectcalico.org/IPv4IPIPTunnelAddr: 10.100.161.128
    volumes.kubernetes.io/controller-managed-attach-detach: "true"

  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: m1.medium
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: RegionOne
    failure-domain.beta.kubernetes.io/zone: nova
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kube-carvi-default-worker-infra-t2f8r-kjl88
    kubernetes.io/os: linux
    node.kubernetes.io/instance-type: m1.medium
    topology.cinder.csi.openstack.org/zone: nova
    topology.kubernetes.io/region: RegionOne
    topology.kubernetes.io/zone: nova

@okozachenko1203

$ o coe nodegroup list 4e1910e1-35d1-4d99-913d-2ee5765156b2
+--------------------------------------+----------------+-----------+--------------------------------------+------------+-----------------+--------+
| uuid                                 | name           | flavor_id | image_id                             | node_count | status          | role   |
+--------------------------------------+----------------+-----------+--------------------------------------+------------+-----------------+--------+
| 81d69894-8696-4802-9ce9-04608582680d | default-master | m1.medium | ef107f29-8f26-474e-8f5f-80d269c7d2cd |          1 | CREATE_COMPLETE | master |
| c61acc4b-41f6-4ea9-834d-e0e04914a96b | default-worker | m1.medium | ef107f29-8f26-474e-8f5f-80d269c7d2cd |          1 | UPDATE_COMPLETE | worker |
+--------------------------------------+----------------+-----------+--------------------------------------+------------+-----------------+--------+
$ kubectl get nodes -L magnum.openstack.org/role
NAME                                          STATUS   ROLES           AGE   VERSION   ROLE
kube-carvi-control-plane-dxsqm-kvzvm          Ready    control-plane   21d   v1.25.3   
kube-carvi-default-worker-infra-t2f8r-kjl88   Ready    <none>          21d   v1.25.3   


okozachenko1203 commented Jul 14, 2023

Top-level labels that meet specific criteria are propagated to the Node labels, while top-level annotations are not propagated:

.labels.[label-meeting-criteria] => Node.labels
.annotations => not propagated

A label must meet one of the following criteria to be propagated to the Node:

  • Has node-role.kubernetes.io as a prefix.
  • Belongs to the node-restriction.kubernetes.io domain.
  • Belongs to the node.cluster.x-k8s.io domain.
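
As a quick sanity check, the filtering rule above can be sketched in Python (an illustrative re-implementation of the described criteria, not the actual Cluster API code):

```python
# Sketch of the Machine-to-Node label propagation filter described above.
# A label key propagates only if it matches one of the allowed
# prefixes/domains; everything else (e.g. magnum.openstack.org/role)
# stays on the Machine object.
ALLOWED_PREFIX = "node-role.kubernetes.io"
ALLOWED_DOMAINS = ("node-restriction.kubernetes.io", "node.cluster.x-k8s.io")

def propagates_to_node(label_key: str) -> bool:
    """Return True if the label key meets the propagation criteria."""
    domain = label_key.split("/", 1)[0]  # the part before the '/'
    if domain == ALLOWED_PREFIX:
        return True
    # "Belongs to the domain" also covers subdomains, e.g. foo.node.cluster.x-k8s.io
    return any(domain == d or domain.endswith("." + d) for d in ALLOWED_DOMAINS)

print(propagates_to_node("node-role.kubernetes.io/worker"))  # True
print(propagates_to_node("magnum.openstack.org/role"))       # False
```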

So we cannot use CAPI metadata propagation for magnum.openstack.org labels.

For reference, these are the labels and prefixes the kubelet is permitted to set on its own Node via --node-labels:

kubernetes.io/hostname
kubernetes.io/instance-type
kubernetes.io/os
kubernetes.io/arch

beta.kubernetes.io/instance-type
beta.kubernetes.io/os
beta.kubernetes.io/arch

failure-domain.beta.kubernetes.io/zone
failure-domain.beta.kubernetes.io/region

failure-domain.kubernetes.io/zone
failure-domain.kubernetes.io/region

[*.]kubelet.kubernetes.io/*
[*.]node.kubernetes.io/*

i.e., the magnum.openstack.org prefix is discouraged (from a security perspective), but it is at least possible, so we can try using kubeletExtraArgs.node-labels in the KubeadmConfigTemplate.
The other concern is that this workaround only applies the labels at cluster creation time, but I don't think we have cases where node group roles change on the fly, so that is OK.
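
The kubeletExtraArgs approach could look roughly like this (a sketch only; the template name and label value are illustrative, based on the cluster shown earlier):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: kube-carvi-default-worker    # illustrative name
  namespace: magnum-system
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            # The kubelet applies these labels once, at node registration,
            # which is why this only works at cluster creation time.
            node-labels: magnum.openstack.org/role=worker
```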

Another option is to request a node label name change in Magnum upstream, or to use different labels in the mcapi project and document them?
@mnaser what is your opinion?


mnaser commented Jul 17, 2023

@okozachenko1203 I think we can diverge from the Magnum role and instead use the native Kubernetes one, so I would like us to propose the following:

  • Use propagation to set node-role.kubernetes.io/NODEGROUPNAME=""

This is a much cleaner and more native way of doing it than what Magnum was doing, and we can document it in our documentation as well. It has the added benefit that the role is visible when running kubectl get nodes too :)

and it's very easy :)
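
The proposal above could be sketched as follows (illustrative; the object names are based on the cluster shown earlier). Because node-role.kubernetes.io/* meets the propagation criteria, Cluster API syncs the label from the Machines to the Nodes:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: kube-carvi-default-worker    # illustrative name
  namespace: magnum-system
spec:
  template:
    metadata:
      labels:
        # Propagated Machine -> Node by Cluster API; kubectl get nodes
        # then shows "default-worker" in the ROLES column.
        node-role.kubernetes.io/default-worker: ""
```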
