
sys-kernel/bootengine: Remove Torcx step #1344

Merged
1 commit merged on Nov 7, 2023

Conversation

@pothos (Member) commented on Nov 6, 2023

This pulls in flatcar/bootengine#77 so that the boot logic no longer tries to run Torcx when /etc/torcx/next-profile exists.
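
For illustration, a minimal shell sketch of the kind of guard this implies in the initramfs logic follows; the function name and paths are hypothetical and are not the actual flatcar/bootengine#77 change:

    # Hypothetical sketch only, not the real bootengine code: the point is that a
    # leftover /etc/torcx/next-profile must no longer cause torcx-generator to run.
    maybe_run_torcx() {
        if [ -e /sysroot/etc/torcx/next-profile ]; then
            # Torcx support has been removed; treat a stale profile file as a no-op.
            echo "torcx profile file present, but Torcx is removed; skipping" >&2
        fi
        return 0
    }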

How to use

Testing done

It boots successfully even with the /etc/torcx/next-profile file present.


github-actions bot commented Nov 6, 2023

Test report for 3776.0.0+nightly-20231102-2100 / amd64 arm64

Platforms tested: qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _raid.go:245: could not reboot machine: machine __f2dd3605-c713-47c5-81d7-4334f52d4050__ failed basic checks: some systemd units failed:"
    L2: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L3: "status: "
    L4: "journal:-- No entries --_"
    L5: " "
    L6: "  "

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _oem.go:199: Couldn_t reboot machine: machine __5866e547-d2f1-45e7-bcfc-01a0d96cdcb6__ failed basic checks: some systemd units failed:"
    L3: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:583: Found systemd unit failed to start (?[0;1;39mldconfig.s???0m - Rebuild Dynamic Linker Cache. ) on machine 5866e547-d2f1-45e7-bcfc-01a0d96cdcb6 console_"
    L7: " "

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _selinux.go:115: failed to reboot machine: machine __e848a397-d274-41c3-b90f-44628a50b3c4__ failed basic checks: some systemd units failed:"
    L2: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L3: "status: "
    L4: "journal:-- No entries --_"
    L5: " "
    L6: "  "

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.devicemapper-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.docker-btrfs-compat 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok kubeadm.v1.25.10.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: " Error: _cluster.go:125: I1106 20:58:35.912894    1787 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.15"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.15"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.15"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.25.15"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.8"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L9: "cluster.go:125: I1106 20:58:45.345988    1943 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L10: "cluster.go:125: [init] Using Kubernetes version: v1.25.15"
    L11: "cluster.go:125: [preflight] Running pre-flight checks"
    L12: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?49]"
    L19: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:125: [apiclient] All control plane components are healthy after 4.503155 seconds"
    L42: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:125: [bootstrap-token] Using token: ctab05.x5mu60ry4tgv1wdh"
    L48: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:125: "
    L58: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:125: "
    L62: "cluster.go:125:   mkdir -p $HOME/.kube"
    L63: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:125: "
    L66: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:125: "
    L68: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:125: "
    L70: "cluster.go:125: You should now deploy a pod network to the cluster."
    L71: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:125: "
    L74: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: kubeadm join 10.0.0.149:6443 --token ctab05.x5mu60ry4tgv1wdh _"
    L77: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:cf76cf68ba0af7b56620833e61be778ebeec703fcbdade8e9f5c3c9e75c7df85 "
    L78: "cluster.go:125: namespace/tigera-operator created"
    L79: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L101: "cluster.go:125: serviceaccount/tigera-operator created"
    L102: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:125: deployment.apps/tigera-operator created"
    L105: "cluster.go:125: error: .status.conditions accessor error: <nil_ is of the type <nil_, expected []interface{}"
    L106: "kubeadm.go:285: unable to setup cluster: unable to run master script: Process exited with status 1_"
    L107: " "
    L108: " Error: _cluster.go:125: I1106 20:48:10.637001    1724 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L109: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.15"
    L110: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.15"
    L111: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.15"
    L112: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.25.15"
    L113: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.8"
    L114: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L115: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L116: "cluster.go:125: I1106 20:48:27.303783    1886 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L117: "cluster.go:125: [init] Using Kubernetes version: v1.25.15"
    L118: "cluster.go:125: [preflight] Running pre-flight checks"
    L119: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L120: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L121: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L122: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L123: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L124: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L125: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?6]"
    L126: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L127: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L128: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L129: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L130: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L131: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L132: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L133: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L134: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L135: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L136: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L137: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L138: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L139: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L140: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L141: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L142: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L143: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L144: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L145: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L146: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L147: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L148: "cluster.go:125: [apiclient] All control plane components are healthy after 6.004442 seconds"
    L149: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L150: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L151: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L152: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L153: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L154: "cluster.go:125: [bootstrap-token] Using token: 4pp957.9boslq8dosi52nmz"
    L155: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L156: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L157: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L158: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L159: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L160: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L161: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L162: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L163: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L164: "cluster.go:125: "
    L165: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L166: "cluster.go:125: "
    L167: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L168: "cluster.go:125: "
    L169: "cluster.go:125:   mkdir -p $HOME/.kube"
    L170: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L171: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L172: "cluster.go:125: "
    L173: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L174: "cluster.go:125: "
    L175: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L176: "cluster.go:125: "
    L177: "cluster.go:125: You should now deploy a pod network to the cluster."
    L178: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L179: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L180: "cluster.go:125: "
    L181: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L182: "cluster.go:125: "
    L183: "cluster.go:125: kubeadm join 10.0.0.16:6443 --token 4pp957.9boslq8dosi52nmz _"
    L184: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:ef0050b7891f133e2cd0062726195539d4d7ac98e17b5127bfc51f16cc47f66a "
    L185: "cluster.go:125: namespace/tigera-operator created"
    L186: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L187: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L188: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L189: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L190: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L191: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L192: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L193: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L194: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L195: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L196: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L197: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L198: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L199: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L200: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L201: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L202: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L203: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L204: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L205: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L206: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L207: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L208: "cluster.go:125: serviceaccount/tigera-operator created"
    L209: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L210: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L211: "cluster.go:125: deployment.apps/tigera-operator created"
    L212: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L213: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L214: "cluster.go:125: installation.operator.tigera.io/default created"
    L215: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L216: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L217: "harness.go:583: Found emergency shell on machine 5e1e8859-aa6b-4331-9148-a4d8a0b22008 console"
    L218: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 5e1e8859-aa6b-4331-9148-a4d8a0b22008 console"
    L219: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???te.target?[0m - Ignition Complete. ) on machine 5e1e8859-aa6b-4331-9148-a4d8a0b22008 console_"
    L220: " "

ok kubeadm.v1.25.10.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:125: I1106 21:00:20.741318    1535 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.15"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.15"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.15"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.25.15"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.8"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L10: "cluster.go:125: I1106 21:00:34.364191    1695 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.25.15"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L17: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L18: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?40]"
    L20: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L28: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L30: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L36: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L41: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L42: "cluster.go:125: [apiclient] All control plane components are healthy after 7.504925 seconds"
    L43: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L44: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L45: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L48: "cluster.go:125: [bootstrap-token] Using token: 6if2yj.h8ipryb1vj4i1z4z"
    L49: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L54: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L55: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L56: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L57: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L62: "cluster.go:125: "
    L63: "cluster.go:125:   mkdir -p $HOME/.kube"
    L64: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L65: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L66: "cluster.go:125: "
    L67: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L68: "cluster.go:125: "
    L69: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L70: "cluster.go:125: "
    L71: "cluster.go:125: You should now deploy a pod network to the cluster."
    L72: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L73: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: kubeadm join 10.0.0.140:6443 --token 6if2yj.h8ipryb1vj4i1z4z _"
    L78: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:7850d03ff21f5b665647039244348e0b0e7404ccca72478990638f478cafbba9 "
    L79: "cluster.go:125: namespace/kube-flannel created"
    L80: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
    L81: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
    L82: "cluster.go:125: serviceaccount/flannel created"
    L83: "cluster.go:125: configmap/kube-flannel-cfg created"
    L84: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
    L85: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L86: "harness.go:583: Found emergency shell on machine f81eedd7-4efc-44fd-b44e-9bb9deafe349 console"
    L87: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine f81eedd7-4efc-44fd-b44e-9bb9deafe349 console"
    L88: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???0m - Ignition (record completion). ) on machine f81eedd7-4efc-44fd-b44e-9bb9deafe349 console_"
    L89: " "

ok kubeadm.v1.25.10.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:125: I1106 20:56:11.029275    1559 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.10"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.10"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.10"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.26.10"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L10: "cluster.go:125: I1106 20:56:26.227292    1719 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.26.10"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L17: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L18: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6?2]"
    L20: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L28: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L30: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L36: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L41: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L42: "cluster.go:125: [apiclient] All control plane components are healthy after 7.503938 seconds"
    L43: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L44: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L45: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L48: "cluster.go:125: [bootstrap-token] Using token: 6o9c3p.7w312ug493v69gsc"
    L49: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L54: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L55: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L56: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L57: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L62: "cluster.go:125: "
    L63: "cluster.go:125:   mkdir -p $HOME/.kube"
    L64: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L65: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L66: "cluster.go:125: "
    L67: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L68: "cluster.go:125: "
    L69: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L70: "cluster.go:125: "
    L71: "cluster.go:125: You should now deploy a pod network to the cluster."
    L72: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L73: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: kubeadm join 10.0.0.62:6443 --token 6o9c3p.7w312ug493v69gsc _"
    L78: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:52fe4fb732e57a0a234185f93e885f379bb5f154d18130fef036395c1c1251a5 "
    L79: "cluster.go:125: namespace/tigera-operator created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L101: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L102: "cluster.go:125: serviceaccount/tigera-operator created"
    L103: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L105: "cluster.go:125: deployment.apps/tigera-operator created"
    L106: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L107: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L108: "cluster.go:125: installation.operator.tigera.io/default created"
    L109: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L110: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L111: "harness.go:583: Found emergency shell on machine 8f90f450-2ccf-416b-b4b4-2885d2277e0f console"
    L112: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 8f90f450-2ccf-416b-b4b4-2885d2277e0f console"
    L113: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???te.target?[0m - Ignition Complete. ) on machine 8f90f450-2ccf-416b-b4b4-2885d2277e0f console_"
    L114: " "

ok kubeadm.v1.26.5.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (3); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1, 2)

                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _cluster.go:125: I1106 21:11:08.671078    1626 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.10"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.10"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.10"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.26.10"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L9: "cluster.go:125: I1106 21:11:19.793605    1783 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L10: "cluster.go:125: [init] Using Kubernetes version: v1.26.10"
    L11: "cluster.go:125: [preflight] Running pre-flight checks"
    L12: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.8?]"
    L19: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:125: [apiclient] All control plane components are healthy after 4.502080 seconds"
    L42: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:125: [bootstrap-token] Using token: yezjxv.58a7vuvpsynb75lo"
    L48: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:125: "
    L58: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:125: "
    L62: "cluster.go:125:   mkdir -p $HOME/.kube"
    L63: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:125: "
    L66: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:125: "
    L68: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:125: "
    L70: "cluster.go:125: You should now deploy a pod network to the cluster."
    L71: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:125: "
    L74: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: kubeadm join 10.0.0.8:6443 --token yezjxv.58a7vuvpsynb75lo _"
    L77: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:18add4cb7d550829c6afc82f6fcfaddf6efad805a854ba025299f49241116edb "
    L78: "cluster.go:125: i  Using Cilium version 1.12.5"
    L79: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:125: ? Created CA in secret cilium-ca"
    L85: "cluster.go:125: ? Generating certificates for Hubble..."
    L86: "cluster.go:125: ? Creating Service accounts..."
    L87: "cluster.go:125: ? Creating Cluster roles..."
    L88: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
    L89: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:125: ? Creating Agent DaemonSet..."
    L92: "cluster.go:125: ? Creating Operator Deployment..."
    L93: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:125: ?[33m    /??_"
    L96: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L97: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L98: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L99: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L100: "cluster.go:125: ?[34m    ___/"
    L101: "cluster.go:125: ?[0m"
    L102: "cluster.go:125: Deployment       cilium-operator    "
    L103: "cluster.go:125: DaemonSet        cilium             "
    L104: "cluster.go:125: Containers:      cilium-operator    "
    L105: "cluster.go:125:                  cilium             "
    L106: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L107: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L108: "--- FAIL: kubeadm.v1.26.5.cilium.base/node_readiness (181.21s)"
    L109: "kubeadm.go:301: nodes are not ready: ready nodes should be equal to 2: 1_"
    L110: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:125: I1106 20:55:18.738800    1611 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.10"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.10"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.10"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.26.10"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L9: "cluster.go:125: I1106 20:55:29.599054    1772 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L10: "cluster.go:125: [init] Using Kubernetes version: v1.26.10"
    L11: "cluster.go:125: [preflight] Running pre-flight checks"
    L12: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?16]"
    L19: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:125: [apiclient] All control plane components are healthy after 4.502111 seconds"
    L42: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:125: [bootstrap-token] Using token: xvbny0.3em4jguls3wp7nyf"
    L48: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:125: "
    L58: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:125: "
    L62: "cluster.go:125:   mkdir -p $HOME/.kube"
    L63: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:125: "
    L66: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:125: "
    L68: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:125: "
    L70: "cluster.go:125: You should now deploy a pod network to the cluster."
    L71: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:125: "
    L74: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: kubeadm join 10.0.0.116:6443 --token xvbny0.3em4jguls3wp7nyf _"
    L77: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:7387313a3958d4b87b5237dbdbfa0db28ae2686cd87d204ff561a1a0025b0001 "
    L78: "cluster.go:125: i  Using Cilium version 1.12.5"
    L79: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:125: ? Created CA in secret cilium-ca"
    L85: "cluster.go:125: ? Generating certificates for Hubble..."
    L86: "cluster.go:125: ? Creating Service accounts..."
    L87: "cluster.go:125: ? Creating Cluster roles..."
    L88: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
    L89: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:125: ? Creating Agent DaemonSet..."
    L92: "cluster.go:125: ? Creating Operator Deployment..."
    L93: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:125: ?[33m    /??_"
    L96: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L97: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L98: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L99: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L100: "cluster.go:125: ?[34m    ___/"
    L101: "cluster.go:125: ?[0m"
    L102: "cluster.go:125: Deployment       cilium-operator    "
    L103: "cluster.go:125: DaemonSet        cilium             "
    L104: "cluster.go:125: Containers:      cilium             "
    L105: "cluster.go:125:                  cilium-operator    "
    L106: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L107: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L108: "--- FAIL: kubeadm.v1.26.5.cilium.base/node_readiness (181.23s)"
    L109: "kubeadm.go:301: nodes are not ready: ready nodes should be equal to 2: 1_"
    L110: " "
    L111: "  "

ok kubeadm.v1.26.5.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (4) ❌ Failed: qemu_uefi-arm64 (1, 2, 3)

                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _cluster.go:125: I1106 21:29:57.049588    1561 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L2: "cluster.go:125: W1106 21:29:57.215183    1561 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L10: "cluster.go:125: I1106 21:30:13.159275    1720 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: W1106 21:30:13.531515    1720 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6?]"
    L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L37: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L43: "cluster.go:125: [apiclient] All control plane components are healthy after 6.003712 seconds"
    L44: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L45: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L46: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L49: "cluster.go:125: [bootstrap-token] Using token: lamt9x.nlnhyc4v7ospps1v"
    L50: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L55: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L56: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L57: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L58: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L61: "cluster.go:125: "
    L62: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L63: "cluster.go:125: "
    L64: "cluster.go:125:   mkdir -p $HOME/.kube"
    L65: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L66: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L67: "cluster.go:125: "
    L68: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L69: "cluster.go:125: "
    L70: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L71: "cluster.go:125: "
    L72: "cluster.go:125: You should now deploy a pod network to the cluster."
    L73: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L74: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L77: "cluster.go:125: "
    L78: "cluster.go:125: kubeadm join 10.0.0.6:6443 --token lamt9x.nlnhyc4v7ospps1v _"
    L79: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:ac4b252eb5f07e80289037b888c7704f8ef351e29a0ef353f16b9b129777fd98 "
    L80: "cluster.go:125: i  Using Cilium version 1.12.5"
    L81: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L82: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L83: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L84: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L85: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L86: "cluster.go:125: ? Created CA in secret cilium-ca"
    L87: "cluster.go:125: ? Generating certificates for Hubble..."
    L88: "cluster.go:125: ? Creating Service accounts..."
    L89: "cluster.go:125: ? Creating Cluster roles..."
    L90: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
    L91: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L92: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L93: "cluster.go:125: ? Creating Agent DaemonSet..."
    L94: "cluster.go:125: ? Creating Operator Deployment..."
    L95: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L96: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L97: "cluster.go:125: ?[33m    /??_"
    L98: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L99: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L100: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L101: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L102: "cluster.go:125: ?[34m    ___/"
    L103: "cluster.go:125: ?[0m"
    L104: "cluster.go:125: Deployment       cilium-operator    "
    L105: "cluster.go:125: DaemonSet        cilium             "
    L106: "cluster.go:125: Containers:      cilium             "
    L107: "cluster.go:125:                  cilium-operator    "
    L108: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L109: "kubeadm.go:285: unable to setup cluster: unable to create worker node: machine __8aed743f-9e0b-499e-b5c1-279f2b3f0ded__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:2?2: connect: no route to host"
    L110: "harness.go:583: Found emergency shell on machine 8aed743f-9e0b-499e-b5c1-279f2b3f0ded console"
    L111: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 8aed743f-9e0b-499e-b5c1-279f2b3f0ded console"
    L112: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???0m - Ignition (record completion). ) on machine 8aed743f-9e0b-499e-b5c1-279f2b3f0ded console_"
    L113: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _cluster.go:125: I1106 21:18:21.162647    1565 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L2: "cluster.go:125: W1106 21:18:21.265597    1565 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L10: "cluster.go:125: I1106 21:18:36.302299    1724 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: W1106 21:18:36.643792    1724 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?4]"
    L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L37: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L43: "cluster.go:125: [apiclient] All control plane components are healthy after 7.003030 seconds"
    L44: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L45: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L46: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L49: "cluster.go:125: [bootstrap-token] Using token: bh3z2l.dq3pkfp7tb3lauc7"
    L50: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L55: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L56: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L57: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L58: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L61: "cluster.go:125: "
    L62: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L63: "cluster.go:125: "
    L64: "cluster.go:125:   mkdir -p $HOME/.kube"
    L65: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L66: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L67: "cluster.go:125: "
    L68: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L69: "cluster.go:125: "
    L70: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L71: "cluster.go:125: "
    L72: "cluster.go:125: You should now deploy a pod network to the cluster."
    L73: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L74: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L77: "cluster.go:125: "
    L78: "cluster.go:125: kubeadm join 10.0.0.14:6443 --token bh3z2l.dq3pkfp7tb3lauc7 _"
    L79: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:2a2e1e88053f6e9dc9a363a02fb0d4d37a293e84bf6455e371176d6910bcc854 "
    L80: "cluster.go:125: i  Using Cilium version 1.12.5"
    L81: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L82: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L83: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L84: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L85: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L86: "cluster.go:125: ? Created CA in secret cilium-ca"
    L87: "cluster.go:125: ? Generating certificates for Hubble..."
    L88: "cluster.go:125: ? Creating Service accounts..."
    L89: "cluster.go:125: ? Creating Cluster roles..."
    L90: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
    L91: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L92: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L93: "cluster.go:125: ? Creating Agent DaemonSet..."
    L94: "cluster.go:125: ? Creating Operator Deployment..."
    L95: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L96: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L97: "cluster.go:125: ?[33m    /??_"
    L98: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L99: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L100: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L101: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L102: "cluster.go:125: ?[34m    ___/"
    L103: "cluster.go:125: ?[0m"
    L104: "cluster.go:125: Deployment       cilium-operator    "
    L105: "cluster.go:125: DaemonSet        cilium             "
    L106: "cluster.go:125: Containers:      cilium-operator    "
    L107: "cluster.go:125:                  cilium             "
    L108: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L109: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L110: "harness.go:583: Found emergency shell on machine 026e7a98-5327-4325-a576-c7d2f0310fb3 console"
    L111: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 026e7a98-5327-4325-a576-c7d2f0310fb3 console"
    L112: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???te.target?[0m - Ignition Complete. ) on machine 026e7a98-5327-4325-a576-c7d2f0310fb3 console_"
    L113: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:125: I1106 20:53:50.143423    1572 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L3: "cluster.go:125: W1106 20:53:50.286341    1572 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L10: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L11: "cluster.go:125: I1106 20:54:06.906433    1731 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L12: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L13: "cluster.go:125: [preflight] Running pre-flight checks"
    L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L16: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L17: "cluster.go:125: W1106 20:54:07.305453    1731 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.2?3]"
    L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L37: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L38: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L43: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L44: "cluster.go:125: [apiclient] All control plane components are healthy after 5.505461 seconds"
    L45: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L46: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L47: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L49: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L50: "cluster.go:125: [bootstrap-token] Using token: lk4th0.j3u7v3y1lc9efv7u"
    L51: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L55: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L56: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L57: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L58: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L59: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L62: "cluster.go:125: "
    L63: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L64: "cluster.go:125: "
    L65: "cluster.go:125:   mkdir -p $HOME/.kube"
    L66: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L67: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L70: "cluster.go:125: "
    L71: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: You should now deploy a pod network to the cluster."
    L74: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L75: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L78: "cluster.go:125: "
    L79: "cluster.go:125: kubeadm join 10.0.0.23:6443 --token lk4th0.j3u7v3y1lc9efv7u _"
    L80: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:34b72db4e353f0a3c5ec1d30a9fedc32c83859b09fff1bf204f20f151fd24c64 "
    L81: "cluster.go:125: i  Using Cilium version 1.12.5"
    L82: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L83: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L84: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L85: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L86: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L87: "cluster.go:125: ? Created CA in secret cilium-ca"
    L88: "cluster.go:125: ? Generating certificates for Hubble..."
    L89: "cluster.go:125: ? Creating Service accounts..."
    L90: "cluster.go:125: ? Creating Cluster roles..."
    L91: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
    L92: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L93: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L94: "cluster.go:125: ? Creating Agent DaemonSet..."
    L95: "cluster.go:125: ? Creating Operator Deployment..."
    L96: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L97: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L98: "cluster.go:125: ?[33m    /??_"
    L99: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L100: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L101: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L102: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L103: "cluster.go:125: ?[34m    ___/"
    L104: "cluster.go:125: ?[0m"
    L105: "cluster.go:125: Deployment       cilium-operator    "
    L106: "cluster.go:125: DaemonSet        cilium             "
    L107: "cluster.go:125: Containers:      cilium             "
    L108: "cluster.go:125:                  cilium-operator    "
    L109: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L110: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L111: "harness.go:583: Found emergency shell on machine 0d92f8d3-bacd-4df6-8b96-fc1d6acb188e console"
    L112: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 0d92f8d3-bacd-4df6-8b96-fc1d6acb188e console"
    L113: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???0m - Ignition (record completion). ) on machine 0d92f8d3-bacd-4df6-8b96-fc1d6acb188e console_"
    L114: " "

ok kubeadm.v1.27.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (4) ❌ Failed: qemu_uefi-arm64 (1, 2, 3)

                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _cluster.go:125: I1106 21:35:11.038214    1526 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L2: "cluster.go:125: W1106 21:35:11.267823    1526 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L10: "cluster.go:125: I1106 21:35:26.059268    1690 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: W1106 21:35:26.420611    1690 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.5?]"
    L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L37: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L43: "cluster.go:125: [apiclient] All control plane components are healthy after 6.002754 seconds"
    L44: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L45: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L46: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L49: "cluster.go:125: [bootstrap-token] Using token: o9g9eh.p2twsqcmazhd997w"
    L50: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L55: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L56: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L57: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L58: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L61: "cluster.go:125: "
    L62: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L63: "cluster.go:125: "
    L64: "cluster.go:125:   mkdir -p $HOME/.kube"
    L65: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L66: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L67: "cluster.go:125: "
    L68: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L69: "cluster.go:125: "
    L70: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L71: "cluster.go:125: "
    L72: "cluster.go:125: You should now deploy a pod network to the cluster."
    L73: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L74: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L77: "cluster.go:125: "
    L78: "cluster.go:125: kubeadm join 10.0.0.5:6443 --token o9g9eh.p2twsqcmazhd997w _"
    L79: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:986352a29bc772e6087b43b5ce2370fa96f705573586df1bbdb2fafb9a8082a0 "
    L80: "cluster.go:125: namespace/kube-flannel created"
    L81: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
    L82: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
    L83: "cluster.go:125: serviceaccount/flannel created"
    L84: "cluster.go:125: configmap/kube-flannel-cfg created"
    L85: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
    L86: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L87: "harness.go:583: Found emergency shell on machine d97a91b4-4958-456f-a29d-71fbaee90cae console"
    L88: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine d97a91b4-4958-456f-a29d-71fbaee90cae console"
    L89: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???te.target?[0m - Ignition Complete. ) on machine d97a91b4-4958-456f-a29d-71fbaee90cae console"
    L90: "harness.go:583: Found emergency shell on machine 918926a8-7d1f-4a5b-aa6f-497e408d3dcf console"
    L91: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 918926a8-7d1f-4a5b-aa6f-497e408d3dcf console"
    L92: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???0m - Ignition (record completion). ) on machine 918926a8-7d1f-4a5b-aa6f-497e408d3dcf console_"
    L93: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _cluster.go:125: I1106 21:18:20.512228    1526 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L2: "cluster.go:125: W1106 21:18:20.613634    1526 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L10: "cluster.go:125: I1106 21:18:36.453748    1684 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: W1106 21:18:36.818373    1684 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.9?]"
    L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L37: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L43: "cluster.go:125: [apiclient] All control plane components are healthy after 6.503338 seconds"
    L44: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L45: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L46: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L49: "cluster.go:125: [bootstrap-token] Using token: z4rxo4.jqp3yolzvwufco76"
    L50: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L55: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L56: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L57: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L58: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L61: "cluster.go:125: "
    L62: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L63: "cluster.go:125: "
    L64: "cluster.go:125:   mkdir -p $HOME/.kube"
    L65: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L66: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L67: "cluster.go:125: "
    L68: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L69: "cluster.go:125: "
    L70: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L71: "cluster.go:125: "
    L72: "cluster.go:125: You should now deploy a pod network to the cluster."
    L73: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L74: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L77: "cluster.go:125: "
    L78: "cluster.go:125: kubeadm join 10.0.0.9:6443 --token z4rxo4.jqp3yolzvwufco76 _"
    L79: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:01e2009df3f22299ec0712c313c5e7c42bffc439818028feb148aac8bb3a34e5 "
    L80: "cluster.go:125: namespace/kube-flannel created"
    L81: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
    L82: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
    L83: "cluster.go:125: serviceaccount/flannel created"
    L84: "cluster.go:125: configmap/kube-flannel-cfg created"
    L85: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
    L86: "kubeadm.go:285: unable to setup cluster: unable to create worker node: machine __595b111c-be8f-472d-b407-b2d898296cb3__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.19:?22: connect: no route to host"
    L87: "harness.go:583: Found emergency shell on machine 02ff29ed-d67e-4640-8072-f65da57a4faf console"
    L88: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 02ff29ed-d67e-4640-8072-f65da57a4faf console"
    L89: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???te.target?[0m - Ignition Complete. ) on machine 02ff29ed-d67e-4640-8072-f65da57a4faf console"
    L90: "harness.go:583: Found emergency shell on machine 595b111c-be8f-472d-b407-b2d898296cb3 console"
    L91: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine 595b111c-be8f-472d-b407-b2d898296cb3 console"
    L92: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???te.target?[0m - Ignition Complete. ) on machine 595b111c-be8f-472d-b407-b2d898296cb3 console_"
    L93: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:125: I1106 20:55:26.655472    1533 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L3: "cluster.go:125: W1106 20:55:26.943660    1533 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L10: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L11: "cluster.go:125: I1106 20:55:46.147357    1693 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L12: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L13: "cluster.go:125: [preflight] Running pre-flight checks"
    L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L16: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L17: "cluster.go:125: W1106 20:55:46.557178    1693 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.4?9]"
    L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L37: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L38: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L43: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L44: "cluster.go:125: [apiclient] All control plane components are healthy after 6.003449 seconds"
    L45: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L46: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L47: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L49: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L50: "cluster.go:125: [bootstrap-token] Using token: 4rox92.om31q5k3ggg89bsh"
    L51: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L55: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L56: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L57: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L58: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L59: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L62: "cluster.go:125: "
    L63: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L64: "cluster.go:125: "
    L65: "cluster.go:125:   mkdir -p $HOME/.kube"
    L66: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L67: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L70: "cluster.go:125: "
    L71: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: You should now deploy a pod network to the cluster."
    L74: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L75: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L78: "cluster.go:125: "
    L79: "cluster.go:125: kubeadm join 10.0.0.49:6443 --token 4rox92.om31q5k3ggg89bsh _"
    L80: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:6068d304fac849b7064b51b9df102c63ec253d2b00e07471d28e739aa4e9e037 "
    L81: "cluster.go:125: namespace/kube-flannel created"
    L82: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
    L83: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
    L84: "cluster.go:125: serviceaccount/flannel created"
    L85: "cluster.go:125: configmap/kube-flannel-cfg created"
    L86: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
    L87: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L88: "harness.go:583: Found emergency shell on machine cb136ba9-a4a3-4f1b-92d6-25b8dd80c003 console"
    L89: "harness.go:583: Found systemd unit failed to start (?[0;1;39mignition-f???es.service?[0m - Ignition (files). ) on machine cb136ba9-a4a3-4f1b-92d6-25b8dd80c003 console"
    L90: "harness.go:583: Found systemd dependency unit failed to start (?[0;1;39migni???te.target?[0m - Ignition Complete. ) on machine cb136ba9-a4a3-4f1b-92d6-25b8dd80c003 console_"
    L91: " "

ok kubeadm.v1.28.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (3) ❌ Failed: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1, 2)

                Diagnostic output for qemu_uefi-arm64, run 2
    L1: "  "
    L2: " Error: _cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.3"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.3"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.3"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.28.3"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.9-0"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L9: "cluster.go:125: [init] Using Kubernetes version: v1.28.3"
    L10: "cluster.go:125: [preflight] Running pre-flight checks"
    L11: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L12: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L13: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L14: "cluster.go:125: W1106 21:13:26.677703    1763 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L15: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?2]"
    L19: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L34: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L35: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L36: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L37: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L38: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L39: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L40: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:125: [apiclient] All control plane components are healthy after 5.502657 seconds"
    L42: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:125: [bootstrap-token] Using token: wvnsah.7zfgwni1eoweqpma"
    L48: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:125: "
    L58: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:125: "
    L62: "cluster.go:125:   mkdir -p $HOME/.kube"
    L63: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:125: "
    L66: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:125: "
    L68: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:125: "
    L70: "cluster.go:125: You should now deploy a pod network to the cluster."
    L71: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:125: "
    L74: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: kubeadm join 10.0.0.12:6443 --token wvnsah.7zfgwni1eoweqpma _"
    L77: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:beabde8577709e3e303a62b4c10dbb339f50ff0315fae182c14fb3bfbc0ce168 "
    L78: "cluster.go:125: namespace/tigera-operator created"
    L79: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L101: "cluster.go:125: serviceaccount/tigera-operator created"
    L102: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:125: deployment.apps/tigera-operator created"
    L105: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L106: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L107: "cluster.go:125: installation.operator.tigera.io/default created"
    L108: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L109: "kubeadm.go:285: unable to setup cluster: unable to create worker node: machine __3e97fddb-1e96-4210-b3a9-d301c1cc535d__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.17:?22: connect: connection refused_"
    L110: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: " Error: _cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.3"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.3"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.3"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.28.3"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.9-0"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L8: "cluster.go:125: [init] Using Kubernetes version: v1.28.3"
    L9: "cluster.go:125: [preflight] Running pre-flight checks"
    L10: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L11: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L12: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L13: "cluster.go:125: W1106 20:53:50.272822    1803 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L14: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L15: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L16: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L17: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.9?2]"
    L18: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L21: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L26: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L27: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L28: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L29: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L32: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L33: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L34: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L35: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L37: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L38: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L39: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L40: "cluster.go:125: [apiclient] All control plane components are healthy after 4.501338 seconds"
    L41: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L42: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L43: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L44: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L46: "cluster.go:125: [bootstrap-token] Using token: 31p05q.ythufhdzf5937rw7"
    L47: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L48: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L52: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L53: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L54: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L55: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L56: "cluster.go:125: "
    L57: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L60: "cluster.go:125: "
    L61: "cluster.go:125:   mkdir -p $HOME/.kube"
    L62: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L63: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L64: "cluster.go:125: "
    L65: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L66: "cluster.go:125: "
    L67: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: You should now deploy a pod network to the cluster."
    L70: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L71: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: kubeadm join 10.0.0.92:6443 --token 31p05q.ythufhdzf5937rw7 _"
    L76: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:fff6b6bd986c483e65d929717b21cc358b2ffe23370a00301aa4d6934d62710a "
    L77: "cluster.go:125: namespace/tigera-operator created"
    L78: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L79: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:125: serviceaccount/tigera-operator created"
    L101: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:125: deployment.apps/tigera-operator created"
    L104: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:125: installation.operator.tigera.io/default created"
    L107: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.28.1.calico.base/nginx_deployment (182.03s)"
    L110: "kubeadm.go:320: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
    L112: " Error: _cluster.go:125: W1106 20:58:52.443899    1581 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL __https://dl.k8s.io/release/stable-1.txt__: Get __https:?//cdn.dl.k8s.io/release/stable-1.txt__: context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
    L113: "cluster.go:125: W1106 20:58:52.443978    1581 version.go:105] falling back to the local client version: v1.28.1"
    L114: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.1"
    L115: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.1"
    L116: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.1"
    L117: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.28.1"
    L118: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L119: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.9-0"
    L120: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L121: "cluster.go:125: [init] Using Kubernetes version: v1.28.3"
    L122: "cluster.go:125: [preflight] Running pre-flight checks"
    L123: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L124: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L125: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L126: "cluster.go:125: W1106 20:59:18.524612    1739 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L127: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L128: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L129: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L130: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?27]"
    L131: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L132: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L133: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L134: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L135: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L136: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L137: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L138: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L139: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L140: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L141: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L142: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L143: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L144: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L145: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L146: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L147: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L148: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L149: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L150: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L151: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L152: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L153: "cluster.go:125: [apiclient] All control plane components are healthy after 6.004093 seconds"
    L154: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L155: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L156: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L157: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L158: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L159: "cluster.go:125: [bootstrap-token] Using token: ijfs9g.wmzxjamrp5ifajtk"
    L160: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L161: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L162: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L163: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L164: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L165: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L166: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L167: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L168: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L169: "cluster.go:125: "
    L170: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L171: "cluster.go:125: "
    L172: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L173: "cluster.go:125: "
    L174: "cluster.go:125:   mkdir -p $HOME/.kube"
    L175: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L176: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L177: "cluster.go:125: "
    L178: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L179: "cluster.go:125: "
    L180: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L181: "cluster.go:125: "
    L182: "cluster.go:125: You should now deploy a pod network to the cluster."
    L183: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L184: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L185: "cluster.go:125: "
    L186: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L187: "cluster.go:125: "
    L188: "cluster.go:125: kubeadm join 10.0.0.127:6443 --token ijfs9g.wmzxjamrp5ifajtk _"
    L189: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:556a6f925ef706a75051a28086cab0973b6a1846647c61302ec662252195ff69 "
    L190: "cluster.go:125: namespace/tigera-operator created"
    L191: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L192: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L193: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L194: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L195: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L196: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L197: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L198: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L199: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L200: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L201: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L202: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L203: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L204: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L205: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L206: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L207: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L208: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L209: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L210: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L211: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L212: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L213: "cluster.go:125: serviceaccount/tigera-operator created"
    L214: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L215: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L216: "cluster.go:125: deployment.apps/tigera-operator created"
    L217: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L218: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L219: "cluster.go:125: installation.operator.tigera.io/default created"
    L220: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L221: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L222: "--- FAIL: kubeadm.v1.28.1.calico.base/nginx_deployment (184.36s)"
    L223: "kubeadm.go:320: nginx is not deployed: ready replicas should be equal to 1: null_"
    L224: " "

ok kubeadm.v1.28.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

Copy link
Member

@t-lo t-lo left a comment

LGTM, all tests are green. Thank you Kai!

This pulls in flatcar/bootengine#77 to not try to run Torcx when /etc/torcx/next-profile exists.
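
A minimal, purely illustrative shell sketch of the guard described in the commit message above — only the path /etc/torcx/next-profile comes from this PR; the script structure and messages are assumptions, not the actual bootengine code:

```sh
#!/bin/sh
# Illustrative sketch only -- not the real flatcar/bootengine#77 change.
# Idea: if a Torcx next-profile file is present, skip the Torcx step
# entirely instead of attempting to run it.
if [ -e /etc/torcx/next-profile ]; then
    echo "torcx: /etc/torcx/next-profile present, skipping Torcx step" >&2
    exit 0
fi
# ...otherwise the Torcx step would run here as before...
```

The real change lives in flatcar/bootengine#77; this sketch only illustrates the conditional.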
@pothos pothos merged commit c35b486 into main Nov 7, 2023
1 check failed
@pothos pothos deleted the kai/bootengine-no-torcx branch November 7, 2023 10:47