This repository has been archived by the owner on Jun 29, 2022. It is now read-only.

Nodes don't have a role #227

Open
iaguis opened this issue Mar 30, 2020 · 6 comments
Labels
area/security (Security related stuff) · area/ux (User Experience) · bug (Something isn't working)

Comments

@iaguis
Contributor

iaguis commented Mar 30, 2020

On a default Lokomotive installation, nodes don't show any roles:

$ kubectl get nodes 
NAME                           STATUS   ROLES    AGE   VERSION
test-cluster-controller-0      Ready    <none>   9d    v1.17.4
test-cluster-pool-1-worker-0   Ready    <none>   9d    v1.17.4
test-cluster-pool-1-worker-1   Ready    <none>   9d    v1.17.4
test-cluster-pool-1-worker-2   Ready    <none>   9d    v1.17.4

We do set the label node.kubernetes.io/master= but not the node-role.kubernetes.io/master=.
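
To confirm which of the two labels is actually present on a controller node, the labels can be inspected directly (node name taken from the example above):

$ kubectl get node test-cluster-controller-0 --show-labels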

We should perhaps set labels for all nodes, so users see something like this:

$ kubectl get nodes 
NAME                           STATUS   ROLES    AGE   VERSION
test-cluster-controller-0      Ready    master   9d    v1.17.4
test-cluster-pool-1-worker-0   Ready    worker   9d    v1.17.4
test-cluster-pool-1-worker-1   Ready    worker   9d    v1.17.4
test-cluster-pool-1-worker-2   Ready    worker   9d    v1.17.4
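
A minimal sketch of how those roles could be applied manually via the API, assuming kubectl access and the node names from the example above (an empty label value is enough for the role to show up in kubectl get nodes):

$ kubectl label node test-cluster-controller-0 node-role.kubernetes.io/master=""
$ kubectl label node test-cluster-pool-1-worker-0 test-cluster-pool-1-worker-1 test-cluster-pool-1-worker-2 node-role.kubernetes.io/worker=""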

Reported-by: @jpetazzo

iaguis added the area/ux (User Experience) label Mar 30, 2020
@invidian
Member

Context from another issue about this problem:

Following a recent k8s update there have been changes to the label format used for labeling k8s nodes. For example, node-role.kubernetes.io/controller got changed to node.kubernetes.io/controller.

This is not entirely correct. The node-role.kubernetes.io/X label still defines a node's role, which you can see when running kubectl get nodes.

In previous versions, Kubernetes treated the node-role.kubernetes.io/master label specially and applied special behavior when draining master nodes; however, upstream now wants to migrate away from that: kubernetes/enhancements#1143. See also kubernetes/kubernetes#80238 for the actual changes that have been implemented.

There is one more thing to consider. While the kubelet has a --node-labels flag, Node authorization (which we currently don't use, but should) does not allow the kubelet to set labels in the node-role.kubernetes.io namespace, to prevent a rogue kubelet from turning its node into a control plane node, which could lead to compromising the cluster.
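
For reference, a sketch of the kube-apiserver flags that enable Node authorization together with the NodeRestriction admission plugin (the plugin is what actually rejects kubelet-set labels under node-role.kubernetes.io); the values below are an illustration, not Lokomotive's current configuration:

kube-apiserver \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction \
  ...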

kubeadm sets the node-role.kubernetes.io/master label on controller Node objects independently of the kubelet, after they are provisioned, so the node-role.kubernetes.io/master label and taint are more or less the canonical way of defining controller nodes.
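
The label is applied the same way as in the sketch earlier; the accompanying taint, expressed as a plain API call via kubectl (node name is from the example above), would look roughly like this:

$ kubectl taint node test-cluster-controller-0 node-role.kubernetes.io/master=:NoSchedule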

We, as the distribution maintainers, could just use a different label for scheduling pods on controller nodes, and the only visible disadvantage would be that kubectl get nodes won't show the roles nicely. However, as described above, that has a potential security impact.

For worker nodes, it's fine not to have a role assigned, in my opinion. However, I believe it is important to keep this behavior for controller nodes, especially if we care about security. This means we should perhaps label the controller nodes via the API after they are registered to the cluster.

As far as I remember, using a privileged kubeconfig on the kubelet won't work, as the label is checked client-side (by the kubelet) and not server-side.

Also, currently we unregister the node on kubelet restart, so we would have to re-apply the label after each restart, which is not very convenient.

@surajssd
Member

surajssd commented Apr 2, 2020

Adding patches like this one will work:

kubectl patch node ip-10-0-21-180 -p '{"metadata":{"labels":{"node-role.kubernetes.io/worker":""}}}'

With this, do we also intend to support adding custom node roles?

iaguis added and then removed the proposed/next-sprint (Issues proposed for next sprint) label Apr 8, 2020
invidian added the area/security (Security related stuff) and bug (Something isn't working) labels Jun 9, 2020
@knrt10
Member

knrt10 commented Aug 10, 2020

Once we have a webhook set up in Lokomotive, we can easily do this using webhooks.

@invidian
Member

Once we have a webhook set up in Lokomotive, we can easily do this using webhooks.

@knrt10 could you explain how you would implement it?

@knrt10
Member

knrt10 commented Aug 11, 2020

I don't know for sure, but after bootstrapping we could try patching it with a webhook, adding the labels via the API after the nodes are registered to the cluster, just like we are going to do with the service account.

@invidian
Member

invidian commented Aug 11, 2020

I don't know for sure, but after bootstrapping we could try patching it with a webhook, adding the labels via the API after the nodes are registered to the cluster, just like we are going to do with the service account.

Hm, I'm curious about the following details:

  • How do we differentiate between the roles? The whole point of the role label is to prevent a node from claiming to be a controller node, so that it doesn't attract pods with access to secrets.
  • If we do that via a webhook, would webhook unavailability prevent nodes from joining the cluster?
  • Adding the role must be done during cluster bootstrapping, so that control plane pods get scheduled on the right nodes. That would mean we need to run the webhook as part of the static control plane, when we don't have kube-proxy yet. I wonder if it's possible to serve the webhook without a Service object.

IMO, for now it should be sufficient if we drop a privileged kubeconfig on the controller nodes (as they have access to the secrets anyway) and add a loop there which periodically labels the node with its role. This way, we keep it robust and simple. Solving the problem for other node roles can be done later, I think (unless we insist on having roles on worker nodes too).
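
A rough sketch of what such a labelling loop on a controller node could look like, assuming a privileged kubeconfig is dropped at a path like /etc/kubernetes/kubeconfig (the path, the interval, and the node name matching the machine hostname are all assumptions):

# Periodically (re-)apply the controller role label to this node, so it
# survives kubelet restarts that re-register the Node object.
while true; do
  kubectl --kubeconfig=/etc/kubernetes/kubeconfig \
    label node "$(hostname)" node-role.kubernetes.io/master="" --overwrite || true
  sleep 60
done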

ipochi added a commit to kinvolk/autoscaler that referenced this issue Aug 18, 2021
Currently the label to identify the controller/master node is hard-coded to
`node-role.kubernetes.io/master`.

There have been some conversations centered around replacing the label
with `node-role.kubernetes.io/control-plane`.

In [Lokomotive](github.com/kinvolk/lokomotive), the label to identify
the controller/master node is `node.kubernetes.io/master`; the reasons
for this are mentioned in this [issue](kinvolk/lokomotive#227).

This commit makes the label configurable via an env variable in the
deployment, `CONTROLLER_NODE_IDENTIFIER_LABEL`. If set, the value of the
env variable is used for identifying controller/master nodes; if not
set, the existing behaviour is followed, using the existing label.

Signed-off-by: Imran Pochi <[email protected]>
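
The commit message does not show how the variable is wired in; a minimal sketch of setting it on the autoscaler deployment, assuming a deployment named cluster-autoscaler in the kube-system namespace (both names are assumptions here):

$ kubectl -n kube-system set env deployment/cluster-autoscaler \
    CONTROLLER_NODE_IDENTIFIER_LABEL=node.kubernetes.io/master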
ipochi added a commit to kinvolk/autoscaler that referenced this issue Aug 19, 2021
piotrnosek pushed a commit to piotrnosek/autoscaler that referenced this issue Nov 30, 2021