
kubeadm init --dry-run --upload-certs fails in upload-certs phase with 'secret not found' #2649

Closed
sj98ta opened this issue Feb 3, 2022 · 5 comments · Fixed by kubernetes/kubernetes#108002
Labels: area/dry-run, help wanted, kind/bug, priority/backlog

Comments


sj98ta commented Feb 3, 2022

What happened?

Running the command kubeadm init --dry-run --upload-certs fails with:

[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-xtr0u0"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  description: UHJveHkgZm9yIG1hbmFnaW5nIFRUTCBmb3IgdGhlIGt1YmVhZG0tY2VydHMgc2VjcmV0
	  expiration: MjAyMi0wMi0wM1QxNToxNToxMlo=
	  token-id: eHRyMHUw
	  token-secret: NWUxYWNobmdvNWRnMWFvMQ==
	kind: Secret
	metadata:
	  creationTimestamp: null
	  name: bootstrap-token-xtr0u0
	  namespace: kube-system
	type: bootstrap.kubernetes.io/token
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-xtr0u0"
error execution phase upload-certs: error uploading certs: error to get token reference: secrets "secret not found" not found

What did you expect to happen?

I would expect the dry run with --upload-certs to succeed.

How can we reproduce it (as minimally and precisely as possible)?

Execute the command kubeadm init --dry-run --upload-certs

Anything else we need to know?

No response

Kubernetes version

$ kubectl version
1.22.5

Cloud provider

None

OS version

# On Linux:
$ cat /etc/os-release
NAME=<redacted name>
VERSION=0.0.0
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
$ uname -a
Linux <redacted hostname> 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 19:59:45 UTC 2021 s390x s390x s390x GNU/Linux

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@sj98ta added the kind/bug label on Feb 3, 2022
@k8s-ci-robot (Contributor)

@sj98ta: There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:

  • /sig <group-name>
  • /wg <group-name>
  • /committee <group-name>

Please see the group list for a listing of the SIGs, working groups, and committees available.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-sig label on Feb 3, 2022
@k8s-ci-robot (Contributor)

@sj98ta: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.


@k8s-ci-robot added the needs-triage label on Feb 3, 2022
@neolit123 (Member)

/transfer kubeadm

@k8s-ci-robot transferred this issue from kubernetes/kubernetes on Feb 3, 2022
@neolit123 (Member) commented Feb 3, 2022

i haven't personally tested dry-run with upload-certs, but this is likely a matter of hooking up the right dry-run client and making sure it has the secrets... alternatively, we could simply skip the API call to GET secrets when dry-running...

can you please run this command with --v=5 and show the full output?
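The second option above ("skip the GET when dry-running") can be sketched in isolation. This is only an illustration of the idea, not the real kubeadm code: `tokenOwnerRef`, `secretGetter`, and the `dryRun` parameter are all hypothetical stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound mimics the "secrets not found" error from an empty backend.
var errNotFound = errors.New("secrets not found")

// secretGetter stands in for a client-side secret lookup.
type secretGetter func(name string) ([]byte, error)

// tokenOwnerRef sketches the proposed guard: in dry-run mode the bootstrap
// token secret was never actually created, so skip the live GET entirely
// and return a placeholder instead of failing.
func tokenOwnerRef(get secretGetter, tokenID string, dryRun bool) (string, error) {
	name := "bootstrap-token-" + tokenID
	if dryRun {
		return name + " (not fetched: dry run)", nil
	}
	if _, err := get(name); err != nil {
		return "", fmt.Errorf("error to get token reference: %w", err)
	}
	return name, nil
}

func main() {
	// An empty store models the dry-run backend that holds no secrets.
	emptyStore := func(string) ([]byte, error) { return nil, errNotFound }

	// Without the guard, the lookup fails just as in the report.
	if _, err := tokenOwnerRef(emptyStore, "xtr0u0", false); err != nil {
		fmt.Println("without guard:", err)
	}

	// With the guard, the dry-run path succeeds.
	ref, _ := tokenOwnerRef(emptyStore, "xtr0u0", true)
	fmt.Println("with guard:", ref)
}
```

The alternative fix (seeding the dry-run client with the secret it is about to create) would keep the GET in place but make it succeed; either approach avoids the failure shown in the logs.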

@neolit123 added the area/dry-run, help wanted, priority/backlog, and triage/accepted labels and removed the needs-sig and needs-triage labels on Feb 3, 2022
@neolit123 added this to the v1.24 milestone on Feb 3, 2022
@sj98ta (Author) commented Feb 3, 2022

Sure... Here you go.

I0203 18:45:58.580422  150721 initconfiguration.go:116] detected and using CRI socket: /var/run/dockershim.sock
I0203 18:45:58.582693  150721 interface.go:431] Looking for default routes with IPv4 addresses
I0203 18:45:58.582745  150721 interface.go:436] Default route transits interface "enc0"
I0203 18:45:58.584105  150721 interface.go:208] Interface enc0 is up
I0203 18:45:58.584209  150721 interface.go:256] Interface "enc0" has 1 addresses :[<REDACTED>/32].
I0203 18:45:58.584271  150721 interface.go:223] Checking addr  <REDACTED>/32.
I0203 18:45:58.584287  150721 interface.go:230] IP found <REDACTED>
I0203 18:45:58.584302  150721 interface.go:262] Found valid IPv4 address <REDACTED> for interface "enc0".
I0203 18:45:58.584318  150721 interface.go:442] Found active IP <REDACTED> 
I0203 18:45:58.584562  150721 kubelet.go:203] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0203 18:45:58.599949  150721 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
I0203 18:45:59.083549  150721 version.go:255] remote version is much newer: v1.23.3; falling back to: stable-1.22
I0203 18:45:59.083692  150721 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.22.txt
[init] Using Kubernetes version: v1.22.6
[preflight] Running pre-flight checks
I0203 18:45:59.507367  150721 checks.go:577] validating Kubernetes and kubeadm version
I0203 18:45:59.507974  150721 checks.go:170] validating if the firewall is enabled and active
I0203 18:45:59.522063  150721 checks.go:205] validating availability of port 6443
I0203 18:45:59.522716  150721 checks.go:205] validating availability of port 10259
I0203 18:45:59.522739  150721 checks.go:205] validating availability of port 10257
I0203 18:45:59.522762  150721 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0203 18:45:59.522777  150721 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0203 18:45:59.522787  150721 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0203 18:45:59.522795  150721 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0203 18:45:59.522826  150721 checks.go:432] validating if the connectivity type is via proxy or direct
I0203 18:45:59.522898  150721 checks.go:471] validating http connectivity to first IP address in the CIDR
I0203 18:45:59.522923  150721 checks.go:471] validating http connectivity to first IP address in the CIDR
I0203 18:45:59.522941  150721 checks.go:106] validating the container runtime
I0203 18:45:59.614746  150721 checks.go:132] validating if the "docker" service is enabled and active
I0203 18:45:59.641397  150721 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0203 18:45:59.641487  150721 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0203 18:45:59.641520  150721 checks.go:649] validating whether swap is enabled or not
I0203 18:45:59.641575  150721 checks.go:372] validating the presence of executable conntrack
I0203 18:45:59.641611  150721 checks.go:372] validating the presence of executable ip
I0203 18:45:59.641631  150721 checks.go:372] validating the presence of executable iptables
I0203 18:45:59.641650  150721 checks.go:372] validating the presence of executable mount
I0203 18:45:59.641928  150721 checks.go:372] validating the presence of executable nsenter
I0203 18:45:59.642106  150721 checks.go:372] validating the presence of executable ebtables
I0203 18:45:59.642525  150721 checks.go:372] validating the presence of executable ethtool
I0203 18:45:59.642723  150721 checks.go:372] validating the presence of executable socat
I0203 18:45:59.643043  150721 checks.go:372] validating the presence of executable tc
I0203 18:45:59.643430  150721 checks.go:372] validating the presence of executable touch
I0203 18:45:59.643469  150721 checks.go:520] running all checks
I0203 18:45:59.737692  150721 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0203 18:45:59.829481  150721 checks.go:618] validating kubelet version
I0203 18:45:59.990462  150721 checks.go:132] validating if the "kubelet" service is enabled and active
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0203 18:46:00.025136  150721 checks.go:205] validating availability of port 10250
I0203 18:46:00.025274  150721 checks.go:205] validating availability of port 2379
I0203 18:46:00.025302  150721 checks.go:205] validating availability of port 2380
I0203 18:46:00.025328  150721 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Would pull the required images (like 'kubeadm config images pull')
[certs] Using certificateDir folder "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947"
I0203 18:46:00.025501  150721 certs.go:111] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0203 18:46:00.589905  150721 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local <REDACTED>] and IPs [10.96.0.1 <REDACTED>]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0203 18:46:01.321534  150721 certs.go:111] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0203 18:46:01.564271  150721 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0203 18:46:01.840870  150721 certs.go:111] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0203 18:46:02.636178  150721 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost <REDACTED>] and IPs [<REDACTED> 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost <REDACTED>] and IPs [<REDACTED> 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0203 18:46:04.007914  150721 certs.go:77] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947"
I0203 18:46:04.366103  150721 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0203 18:46:04.537417  150721 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0203 18:46:04.646530  150721 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0203 18:46:05.201209  150721 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947/config.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0203 18:46:05.593410  150721 manifests.go:99] [control-plane] getting StaticPodSpecs
I0203 18:46:05.594348  150721 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0203 18:46:05.594397  150721 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0203 18:46:05.594406  150721 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0203 18:46:05.594409  150721 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0203 18:46:05.594413  150721 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0203 18:46:05.607887  150721 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0203 18:46:05.608113  150721 manifests.go:99] [control-plane] getting StaticPodSpecs
I0203 18:46:05.609255  150721 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0203 18:46:05.609495  150721 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0203 18:46:05.609511  150721 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0203 18:46:05.609527  150721 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0203 18:46:05.609540  150721 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0203 18:46:05.609553  150721 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0203 18:46:05.609561  150721 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0203 18:46:05.611072  150721 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0203 18:46:05.611214  150721 manifests.go:99] [control-plane] getting StaticPodSpecs
I0203 18:46:05.611959  150721 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0203 18:46:05.613162  150721 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947/kube-scheduler.yaml"
[dryrun] Would ensure that "/var/lib/etcd" directory is present
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947"
I0203 18:46:05.614897  150721 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947/etcd.yaml"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to the "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947" directory
[dryrun] The certificates or kubeconfig files would not be printed due to their sensitive nature
[dryrun] Please examine the "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: <REDACTED>:6443
	  creationTimestamp: null
	  labels:
	    component: kube-apiserver
	    tier: control-plane
	  name: kube-apiserver
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-apiserver
	    - --advertise-address=<REDACTED>
	    - --allow-privileged=true
	    - --authorization-mode=Node,RBAC
	    - --client-ca-file=/etc/kubernetes/pki/ca.crt
	    - --enable-admission-plugins=NodeRestriction
	    - --enable-bootstrap-token-auth=true
	    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
	    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
	    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
	    - --etcd-servers=https://127.0.0.1:2379
	    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
	    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
	    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
	    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
	    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
	    - --requestheader-allowed-names=front-proxy-client
	    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
	    - --requestheader-extra-headers-prefix=X-Remote-Extra-
	    - --requestheader-group-headers=X-Remote-Group
	    - --requestheader-username-headers=X-Remote-User
	    - --secure-port=6443
	    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
	    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
	    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
	    - --service-cluster-ip-range=10.96.0.0/12
	    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
	    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
	    image: k8s.gcr.io/kube-apiserver:v1.22.6
	    imagePullPolicy: IfNotPresent
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: <REDACTED>
	        path: /livez
	        port: 6443
	        scheme: HTTPS
	      initialDelaySeconds: 10
	      periodSeconds: 10
	      timeoutSeconds: 15
	    name: kube-apiserver
	    readinessProbe:
	      failureThreshold: 3
	      httpGet:
	        host: <REDACTED>
	        path: /readyz
	        port: 6443
	        scheme: HTTPS
	      periodSeconds: 1
	      timeoutSeconds: 15
	    resources:
	      requests:
	        cpu: 250m
	    startupProbe:
	      failureThreshold: 24
	      httpGet:
	        host: <REDACTED>
	        path: /livez
	        port: 6443
	        scheme: HTTPS
	      initialDelaySeconds: 10
	      periodSeconds: 10
	      timeoutSeconds: 15
	    volumeMounts:
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/ca-certificates
	      name: etc-ca-certificates
	      readOnly: true
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /usr/local/share/ca-certificates
	      name: usr-local-share-ca-certificates
	      readOnly: true
	    - mountPath: /usr/share/ca-certificates
	      name: usr-share-ca-certificates
	      readOnly: true
	  hostNetwork: true
	  priorityClassName: system-node-critical
	  securityContext:
	    seccompProfile:
	      type: RuntimeDefault
	  volumes:
	  - hostPath:
	      path: /etc/ssl/certs
	      type: DirectoryOrCreate
	    name: ca-certs
	  - hostPath:
	      path: /etc/ca-certificates
	      type: DirectoryOrCreate
	    name: etc-ca-certificates
	  - hostPath:
	      path: /etc/kubernetes/pki
	      type: DirectoryOrCreate
	    name: k8s-certs
	  - hostPath:
	      path: /usr/local/share/ca-certificates
	      type: DirectoryOrCreate
	    name: usr-local-share-ca-certificates
	  - hostPath:
	      path: /usr/share/ca-certificates
	      type: DirectoryOrCreate
	    name: usr-share-ca-certificates
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  labels:
	    component: kube-controller-manager
	    tier: control-plane
	  name: kube-controller-manager
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-controller-manager
	    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
	    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
	    - --bind-address=127.0.0.1
	    - --client-ca-file=/etc/kubernetes/pki/ca.crt
	    - --cluster-name=kubernetes
	    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
	    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
	    - --controllers=*,bootstrapsigner,tokencleaner
	    - --kubeconfig=/etc/kubernetes/controller-manager.conf
	    - --leader-elect=true
	    - --port=0
	    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
	    - --root-ca-file=/etc/kubernetes/pki/ca.crt
	    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
	    - --use-service-account-credentials=true
	    image: k8s.gcr.io/kube-controller-manager:v1.22.6
	    imagePullPolicy: IfNotPresent
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10257
	        scheme: HTTPS
	      initialDelaySeconds: 10
	      periodSeconds: 10
	      timeoutSeconds: 15
	    name: kube-controller-manager
	    resources:
	      requests:
	        cpu: 200m
	    startupProbe:
	      failureThreshold: 24
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10257
	        scheme: HTTPS
	      initialDelaySeconds: 10
	      periodSeconds: 10
	      timeoutSeconds: 15
	    volumeMounts:
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/ca-certificates
	      name: etc-ca-certificates
	      readOnly: true
	    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
	      name: flexvolume-dir
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /etc/kubernetes/controller-manager.conf
	      name: kubeconfig
	      readOnly: true
	    - mountPath: /usr/local/share/ca-certificates
	      name: usr-local-share-ca-certificates
	      readOnly: true
	    - mountPath: /usr/share/ca-certificates
	      name: usr-share-ca-certificates
	      readOnly: true
	  hostNetwork: true
	  priorityClassName: system-node-critical
	  securityContext:
	    seccompProfile:
	      type: RuntimeDefault
	  volumes:
	  - hostPath:
	      path: /etc/ssl/certs
	      type: DirectoryOrCreate
	    name: ca-certs
	  - hostPath:
	      path: /etc/ca-certificates
	      type: DirectoryOrCreate
	    name: etc-ca-certificates
	  - hostPath:
	      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
	      type: DirectoryOrCreate
	    name: flexvolume-dir
	  - hostPath:
	      path: /etc/kubernetes/pki
	      type: DirectoryOrCreate
	    name: k8s-certs
	  - hostPath:
	      path: /etc/kubernetes/controller-manager.conf
	      type: FileOrCreate
	    name: kubeconfig
	  - hostPath:
	      path: /usr/local/share/ca-certificates
	      type: DirectoryOrCreate
	    name: usr-local-share-ca-certificates
	  - hostPath:
	      path: /usr/share/ca-certificates
	      type: DirectoryOrCreate
	    name: usr-share-ca-certificates
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  labels:
	    component: kube-scheduler
	    tier: control-plane
	  name: kube-scheduler
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-scheduler
	    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
	    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
	    - --bind-address=127.0.0.1
	    - --kubeconfig=/etc/kubernetes/scheduler.conf
	    - --leader-elect=true
	    - --port=0
	    image: k8s.gcr.io/kube-scheduler:v1.22.6
	    imagePullPolicy: IfNotPresent
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10259
	        scheme: HTTPS
	      initialDelaySeconds: 10
	      periodSeconds: 10
	      timeoutSeconds: 15
	    name: kube-scheduler
	    resources:
	      requests:
	        cpu: 100m
	    startupProbe:
	      failureThreshold: 24
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10259
	        scheme: HTTPS
	      initialDelaySeconds: 10
	      periodSeconds: 10
	      timeoutSeconds: 15
	    volumeMounts:
	    - mountPath: /etc/kubernetes/scheduler.conf
	      name: kubeconfig
	      readOnly: true
	  hostNetwork: true
	  priorityClassName: system-node-critical
	  securityContext:
	    seccompProfile:
	      type: RuntimeDefault
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/scheduler.conf
	      type: FileOrCreate
	    name: kubeconfig
	status: {}
[dryrun] Would write file "/var/lib/kubelet/config.yaml" with content:
	apiVersion: kubelet.config.k8s.io/v1beta1
	authentication:
	  anonymous:
	    enabled: false
	  webhook:
	    cacheTTL: 0s
	    enabled: true
	  x509:
	    clientCAFile: /etc/kubernetes/pki/ca.crt
	authorization:
	  mode: Webhook
	  webhook:
	    cacheAuthorizedTTL: 0s
	    cacheUnauthorizedTTL: 0s
	cgroupDriver: systemd
	clusterDNS:
	- 10.96.0.10
	clusterDomain: cluster.local
	cpuManagerReconcilePeriod: 0s
	evictionPressureTransitionPeriod: 0s
	fileCheckFrequency: 0s
	healthzBindAddress: 127.0.0.1
	healthzPort: 10248
	httpCheckFrequency: 0s
	imageMinimumGCAge: 0s
	kind: KubeletConfiguration
	logging: {}
	memorySwap: {}
	nodeStatusReportFrequency: 0s
	nodeStatusUpdateFrequency: 0s
	rotateCertificates: true
	runtimeRequestTimeout: 0s
	shutdownGracePeriod: 0s
	shutdownGracePeriodCriticalPods: 0s
	staticPodPath: /etc/kubernetes/manifests
	streamingConnectionIdleTimeout: 0s
	syncFrequency: 0s
	volumeStatsAggPeriod: 0s
[dryrun] Would write file "/var/lib/kubelet/kubeadm-flags.env" with content:
	KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.5"
I0203 18:46:05.618805  150721 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/tmp/kubeadm-init-dryrun594704947". This can take up to 4m0s
I0203 18:46:05.619419  150721 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  ClusterConfiguration: |
	    apiServer:
	      extraArgs:
	        authorization-mode: Node,RBAC
	      timeoutForControlPlane: 4m0s
	    apiVersion: kubeadm.k8s.io/v1beta3
	    certificatesDir: /etc/kubernetes/pki
	    clusterName: kubernetes
	    controllerManager: {}
	    dns: {}
	    etcd:
	      local:
	        dataDir: /var/lib/etcd
	    imageRepository: k8s.gcr.io
	    kind: ClusterConfiguration
	    kubernetesVersion: v1.22.6
	    networking:
	      dnsDomain: cluster.local
	      serviceSubnet: 10.96.0.0/12
	    scheduler: {}
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  name: kubeadm-config
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1
	kind: Role
	metadata:
	  creationTimestamp: null
	  name: kubeadm:nodes-kubeadm-config
	  namespace: kube-system
	rules:
	- apiGroups:
	  - ""
	  resourceNames:
	  - kubeadm-config
	  resources:
	  - configmaps
	  verbs:
	  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1
	kind: RoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:nodes-kubeadm-config
	  namespace: kube-system
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: Role
	  name: kubeadm:nodes-kubeadm-config
	subjects:
	- kind: Group
	  name: system:bootstrappers:kubeadm:default-node-token
	- kind: Group
	  name: system:nodes
I0203 18:46:05.627484  150721 uploadconfig.go:124] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  kubelet: |
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    authentication:
	      anonymous:
	        enabled: false
	      webhook:
	        cacheTTL: 0s
	        enabled: true
	      x509:
	        clientCAFile: /etc/kubernetes/pki/ca.crt
	    authorization:
	      mode: Webhook
	      webhook:
	        cacheAuthorizedTTL: 0s
	        cacheUnauthorizedTTL: 0s
	    cgroupDriver: systemd
	    clusterDNS:
	    - 10.96.0.10
	    clusterDomain: cluster.local
	    cpuManagerReconcilePeriod: 0s
	    evictionPressureTransitionPeriod: 0s
	    fileCheckFrequency: 0s
	    healthzBindAddress: 127.0.0.1
	    healthzPort: 10248
	    httpCheckFrequency: 0s
	    imageMinimumGCAge: 0s
	    kind: KubeletConfiguration
	    logging: {}
	    memorySwap: {}
	    nodeStatusReportFrequency: 0s
	    nodeStatusUpdateFrequency: 0s
	    rotateCertificates: true
	    runtimeRequestTimeout: 0s
	    shutdownGracePeriod: 0s
	    shutdownGracePeriodCriticalPods: 0s
	    staticPodPath: /etc/kubernetes/manifests
	    streamingConnectionIdleTimeout: 0s
	    syncFrequency: 0s
	    volumeStatsAggPeriod: 0s
	kind: ConfigMap
	metadata:
	  annotations:
	    kubeadm.kubernetes.io/component-config.hash: sha256:4c9f421edef822211203b587a970fbb190b0bf200194fabe2d3065b5983da050
	  creationTimestamp: null
	  name: kubelet-config-1.22
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1
	kind: Role
	metadata:
	  creationTimestamp: null
	  name: kubeadm:kubelet-config-1.22
	  namespace: kube-system
	rules:
	- apiGroups:
	  - ""
	  resourceNames:
	  - kubelet-config-1.22
	  resources:
	  - configmaps
	  verbs:
	  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1
	kind: RoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:kubelet-config-1.22
	  namespace: kube-system
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: Role
	  name: kubeadm:kubelet-config-1.22
	subjects:
	- kind: Group
	  name: system:nodes
	- kind: Group
	  name: system:bootstrappers:kubeadm:default-node-token
I0203 18:46:05.631123  150721 uploadconfig.go:129] [upload-config] Preserving the CRISocket information for the control-plane node
I0203 18:46:05.631151  150721 patchnode.go:31] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "<REDACTED>" as an annotation
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "<REDACTED>"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "<REDACTED>"
[dryrun] Attached patch:
	{"metadata":{"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock"}}}
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-nxp6o5"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  description: UHJveHkgZm9yIG1hbmFnaW5nIFRUTCBmb3IgdGhlIGt1YmVhZG0tY2VydHMgc2VjcmV0
	  expiration: MjAyMi0wMi0wM1QyMDo0NjowNlo=
	  token-id: bnhwNm81
	  token-secret: cHdxeW95YjFidjU4M2JnMg==
	kind: Secret
	metadata:
	  creationTimestamp: null
	  name: bootstrap-token-nxp6o5
	  namespace: kube-system
	type: bootstrap.kubernetes.io/token
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-nxp6o5"
secrets "secret not found" not found
error to get token reference
k8s.io/kubernetes/cmd/kubeadm/app/phases/copycerts.getSecretOwnerRef
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/copycerts/copycerts.go:167
k8s.io/kubernetes/cmd/kubeadm/app/phases/copycerts.UploadCerts
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/copycerts/copycerts.go:105
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadCerts
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/uploadcerts.go:71
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_s390x.s:765
error uploading certs
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadCerts
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/uploadcerts.go:72
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_s390x.s:765
error execution phase upload-certs
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_s390x.s:765