Nodes don't have a role #227
Context from another issue about this problem:
This is not entirely correct. In previous versions K8s treated … There is one more thing to consider. While kubelet has …, kubeadm sets … We as distribution maintainers could just use a different label for scheduling pods on the controller node, and the only visible disadvantage will be that … For worker nodes, it's fine to not have the role assigned, in my opinion. However, I believe it is important to keep this behavior for controller nodes, especially if we care about security. This means perhaps we should label the controller nodes via the API after they are registered to the cluster. As far as I remember, using a privileged kubeconfig on kubelet won't work, as the label is checked client-side (by kubelet) and not server-side. Also, currently we unregister the node on …
Adding patches like these will work:
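The patches themselves did not survive in this thread. As a hedged illustration only, the end result they aim for (a role label on a registered node) can be achieved manually like this (the node name is a placeholder, and admin access to a running cluster is assumed):

```shell
# Hypothetical example: add the role label to an already-registered
# controller node via the API (node name is a placeholder).
kubectl label node my-controller-node node-role.kubernetes.io/master=
```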
With this, do we also intend to support adding custom node roles?
Once we have the webhook setup in Lokomotive, we can easily do this using webhooks.
@knrt10 could you explain how you would implement it?
I don't know for sure, but after bootstrapping, we could try patching it with a webhook and add labels accordingly using the API after the nodes are registered to the cluster, just like we are going to do with the service account.
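A mutating webhook would effectively apply the same change a manual API patch does. A sketch of that patch, assuming admin access to a running cluster (the node name is a placeholder):

```shell
# Hypothetical merge patch adding the role label after registration;
# a webhook would apply an equivalent mutation server-side.
kubectl patch node my-controller-node --type=merge \
  -p '{"metadata":{"labels":{"node-role.kubernetes.io/master":""}}}'
```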
Hm, I'm curious about the following details:
IMO for now, it should be sufficient if we drop privileged …
Currently the label used to identify controller/master nodes is hard-coded to `node-role.kubernetes.io/master`. There have been some conversations centered around replacing the label with `node-role.kubernetes.io/control-plane`. In [Lokomotive](github.com/kinvolk/lokomotive), the label used to identify the controller/master node is `node.kubernetes.io/master`; the reasons for this are explained in this [issue](kinvolk/lokomotive#227). This commit makes the label configurable via the `CONTROLLER_NODE_IDENTIFIER_LABEL` environment variable in the deployment: if set, its value is used to identify controller/master nodes; if not set, the existing behaviour is kept and the existing label is used. Signed-off-by: Imran Pochi <[email protected]>
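The fallback described in the commit message (env override, else hard-coded default) can be sketched in plain shell; this is an illustration of the selection logic only, not the deployment's actual implementation:

```shell
# Fall back to the hard-coded label when the env variable is unset or empty.
default_label="node-role.kubernetes.io/master"
label="${CONTROLLER_NODE_IDENTIFIER_LABEL:-$default_label}"
echo "$label"
```

With `CONTROLLER_NODE_IDENTIFIER_LABEL` unset, this prints the default `node-role.kubernetes.io/master`.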
On a default Lokomotive installation, nodes don't show any roles:
We do set the label `node.kubernetes.io/master=` but not `node-role.kubernetes.io/master=`. We should perhaps set labels for all nodes, so users see something like this:
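For example, assigning a role label to the workers as well would make `kubectl get nodes` show a role for every node (node names are placeholders; admin access to a running cluster is assumed):

```shell
# Hypothetical: give worker nodes a role label too, so the ROLES
# column of `kubectl get nodes` is populated for every node.
kubectl label node my-worker-node node-role.kubernetes.io/worker=
```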
Reported-by: @jpetazzo