
1.18 joining nodes need RBAC that is missing in a 1.17 cluster #2079

Closed

neolit123 (Member) opened this issue Mar 26, 2020 · 1 comment · Fixed by kubernetes/website#19868

Labels: area/security, area/upgrades, kind/bug, priority/critical-urgent
Milestone: v1.19
neolit123 commented Mar 26, 2020

Workaround

Run kubeadm init phase bootstrap-token on a control-plane node before joining 1.18 nodes.

Alternatively, see:
https://github.com/kubernetes/website/pull/19868/files
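The workaround can be sketched as follows. Note the RBAC manifest is illustrative, not the canonical fix: the ClusterRole/ClusterRoleBinding names below are assumptions modeled on what a 1.18 kubeadm creates; the website PR linked above documents the exact objects.

```shell
# Option 1: run on a control-plane node. This (re)creates the
# bootstrap-token RBAC rules that a 1.18 "kubeadm init" would have set up:
sudo kubeadm init phase bootstrap-token

# Option 2 (sketch): manually grant bootstrap tokens permission to read
# Node objects, which is what the failing join phase needs. The object
# names below are illustrative assumptions; see the linked website PR
# for the canonical manifest.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeadm:get-nodes
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:get-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeadm:get-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:kubeadm:default-node-token
EOF
```

Either option must be applied on the upgraded 1.17 control plane before retrying `kubeadm join` from the 1.18 node.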


reported here:
kubernetes/kubernetes#89501

cc @kvaps @ajs11174

What happened:
After upgrading my cluster to 1.18 I cannot join a new worker node.

What you expected to happen:
The worker node joins successfully after upgrading kubeadm/kubectl/kubelet to version 1.18.

How to reproduce it (as minimally and precisely as possible):
sudo kubeadm join --config /etc/init/config.yaml --v=5
(I also tried joining with the default kubeadm token generate --print-join-command output)

Anything else we need to know?:
See the logs below. It appears to be an authentication issue. Specifically:

nodes "ip-10-0-64-22.ec2.internal" is forbidden: User "system:bootstrap:m8xqjy" cannot get resource "nodes" in API group "" at the cluster scope
cannot get Node "ip-10-0-64-22.ec2.internal"
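The forbidden error above can be confirmed from a control-plane node by impersonating the bootstrap token's user with `kubectl auth can-i`. This is a diagnostic sketch against a live cluster; the token ID `m8xqjy` is taken from the logs in this report, and the group name is an assumption matching kubeadm's default bootstrap-token group.

```shell
# Check whether the bootstrap token's user may read Node objects.
# "m8xqjy" is the token ID from the error message above; substitute your own.
kubectl auth can-i get nodes \
  --as=system:bootstrap:m8xqjy \
  --as-group=system:bootstrappers:kubeadm:default-node-token
# On an affected 1.17 -> 1.18 cluster this is expected to print "no";
# after applying the workaround it should print "yes".
```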

Environment:

  • Kubernetes version (use kubectl version): v1.18
  • Cloud provider or hardware configuration: AWS
  • OS (e.g: cat /etc/os-release): Ubuntu 18.04
  • Install tools: kubeadm

Logs:

sudo kubeadm join --config /etc/init/config.yaml --v=5
W0326 04:04:09.799639    8977 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0326 04:04:09.799713    8977 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I0326 04:04:09.799736    8977 joinconfiguration.go:74] loading configuration from "/etc/init/config.yaml"
I0326 04:04:09.800977    8977 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0326 04:04:09.801072    8977 preflight.go:90] [preflight] Running general checks
I0326 04:04:09.801122    8977 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0326 04:04:09.801179    8977 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0326 04:04:09.801197    8977 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0326 04:04:09.801211    8977 checks.go:102] validating the container runtime
I0326 04:04:09.933557    8977 checks.go:128] validating if the service is enabled and active
I0326 04:04:10.136610    8977 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0326 04:04:10.136681    8977 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0326 04:04:10.136722    8977 checks.go:649] validating whether swap is enabled or not
I0326 04:04:10.136761    8977 checks.go:376] validating the presence of executable conntrack
I0326 04:04:10.136800    8977 checks.go:376] validating the presence of executable ip
I0326 04:04:10.136830    8977 checks.go:376] validating the presence of executable iptables
I0326 04:04:10.136857    8977 checks.go:376] validating the presence of executable mount
I0326 04:04:10.136892    8977 checks.go:376] validating the presence of executable nsenter
I0326 04:04:10.136916    8977 checks.go:376] validating the presence of executable ebtables
I0326 04:04:10.136941    8977 checks.go:376] validating the presence of executable ethtool
I0326 04:04:10.136966    8977 checks.go:376] validating the presence of executable socat
I0326 04:04:10.136992    8977 checks.go:376] validating the presence of executable tc
I0326 04:04:10.137018    8977 checks.go:376] validating the presence of executable touch
I0326 04:04:10.137048    8977 checks.go:520] running all checks
I0326 04:04:10.265981    8977 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0326 04:04:10.267169    8977 checks.go:618] validating kubelet version
I0326 04:04:10.332680    8977 checks.go:128] validating if the service is enabled and active
I0326 04:04:10.345104    8977 checks.go:201] validating availability of port 10250
I0326 04:04:10.345360    8977 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0326 04:04:10.345385    8977 checks.go:432] validating if the connectivity type is via proxy or direct
I0326 04:04:10.345425    8977 join.go:441] [preflight] Discovering cluster-info
I0326 04:04:10.345463    8977 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "10.0.64.20:6443"
I0326 04:04:10.356595    8977 token.go:103] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "10.0.64.20:6443"
I0326 04:04:10.356619    8977 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0326 04:04:10.356631    8977 join.go:455] [preflight] Fetching init configuration
I0326 04:04:10.356641    8977 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0326 04:04:10.371734    8977 interface.go:400] Looking for default routes with IPv4 addresses
I0326 04:04:10.371756    8977 interface.go:405] Default route transits interface "ens5"
I0326 04:04:10.371874    8977 interface.go:208] Interface ens5 is up
I0326 04:04:10.371932    8977 interface.go:256] Interface "ens5" has 2 addresses :[10.0.64.22/18 fe80::101c:a6ff:fe99:3c6f/64].
I0326 04:04:10.371959    8977 interface.go:223] Checking addr  10.0.64.22/18.
I0326 04:04:10.371972    8977 interface.go:230] IP found 10.0.64.22
I0326 04:04:10.371985    8977 interface.go:262] Found valid IPv4 address 10.0.64.22 for interface "ens5".
I0326 04:04:10.371995    8977 interface.go:411] Found active IP 10.0.64.22 
I0326 04:04:10.372053    8977 preflight.go:101] [preflight] Running configuration dependant checks
I0326 04:04:10.372067    8977 controlplaneprepare.go:211] [download-certs] Skipping certs download
I0326 04:04:10.372083    8977 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0326 04:04:10.372994    8977 kubelet.go:119] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0326 04:04:10.374347    8977 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "ip-10-0-64-22.ec2.internal" and status "Ready"
nodes "ip-10-0-64-22.ec2.internal" is forbidden: User "system:bootstrap:m8xqjy" cannot get resource "nodes" in API group "" at the cluster scope
cannot get Node "ip-10-0-64-22.ec2.internal"
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runKubeletStartJoinPhase
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/kubelet.go:148
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdJoin.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:170
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase kubelet-start
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdJoin.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:170
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
On Mar 26, 2020, neolit123 added the labels area/security, area/upgrades, kind/bug, and priority/critical-urgent, added the issue to the v1.19 milestone, and self-assigned it.
neolit123 (Member, Author) commented:

PR with the fix is here:
kubernetes/kubernetes#89537

but this also needs a mention in the docs about immutable upgrades, where the workaround (see OP) is still required.
