TL;DR: Kubernetes 1.16 & 1.17 are broken and will not be fixed: they are EOL (as of June 2021), and the effort to investigate the issue is too high to be worth it. The lowest version that works is 1.18.
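The practical workaround is to pin K3s to 1.18 or newer. A minimal example, assuming any 1.18+ tag of rancher/k3s works (v1.20.6-k3s1 is the default version seen in the logs below):
k3d cluster create --wait --image=rancher/k3s:v1.20.6-k3s1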
This issue is created to be pinned and visible.
Related: k3d-io/k3d#663
In case someone wants to investigate, here is the data 👇
Recently (somewhere between 26.06.2021 and 30.06.2021), something changed, and K3s versions 1.17 and 1.16 cannot start properly anymore. Everything was fine in GitHub Actions on 26.06.2021 with K3s 1.17 & 1.16. It remains fine with K3s 1.18-1.21. It is all fine on a MacBook with all recent and old versions of K3d. The K3d version itself seems to have no effect on the issue either way, but maybe I tried it wrong.
I.e., between Jun 26 and Jun 30, something broke specifically in GitHub Actions, and specifically with K3s 1.16-1.17.
The cluster is created with:
k3d cluster create --wait --image=rancher/k3s:v1.17.17-k3s1
The failing builds look like this:
k3d installed into /usr/local/bin/k3d
Run 'k3d --help' to see what you can do with it.
k3d version v4.4.3
k3s version v1.20.6-k3s1 (default)
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default' (d7c8ddf7787d2ce7e4c64f1389b86352cc9bb447e33fc7b468ac2e70f4521930)
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Pulling image 'rancher/k3s:v1.17.17-k3s1'
INFO[0008] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0008] Pulling image 'docker.io/rancher/k3d-proxy:v4.4.3'
INFO[0011] Starting cluster 'k3s-default'
INFO[0011] Starting servers...
INFO[0011] Starting Node 'k3d-k3s-default-server-0'
INFO[0018] Starting agents...
INFO[0018] Starting helpers...
INFO[0018] Starting Node 'k3d-k3s-default-serverlb'
INFO[0019] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
WARN[0022] Failed to patch CoreDNS ConfigMap to include entry '172.18.0.1 host.k3d.internal': Exec process in node 'k3d-k3s-default-server-0' failed with exit code '1'
INFO[0022] Successfully added host record to /etc/hosts in 2/2 nodes
INFO[0022] Cluster 'k3s-default' created successfully!
INFO[0022] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0022] You can now use it like this:
kubectl config use-context k3d-k3s-default
kubectl cluster-info
Then, cluster polling begins with kubectl get serviceaccount default every 1 second:
Error from server (NotFound): serviceaccounts "default" not found
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get serviceaccounts default)
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get serviceaccounts default)
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get serviceaccounts default)
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get serviceaccounts default)
Error from server (NotFound): serviceaccounts "default" not found
…………
And so on, until the build times out.
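For reference, the polling is essentially this loop (a minimal bash sketch; the actual CI step may differ):
while ! kubectl get serviceaccount default; do sleep 1; done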
The only essential difference I have found is the following (though it is most likely unrelated):
In newly broken builds:
INFO[0017] Starting Node 'k3d-k3s-default-serverlb'
INFO[0018] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
WARN[0021] Failed to patch CoreDNS ConfigMap to include entry '172.18.0.1 host.k3d.internal': Exec process in node 'k3d-k3s-default-server-0' failed with exit code '1'
INFO[0021] Successfully added host record to /etc/hosts in 2/2 nodes
INFO[0021] Cluster 'k3s-default' created successfully!
In old & new successful builds:
INFO[0014] Starting Node 'k3d-k3s-default-serverlb'
INFO[0015] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0018] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0018] Cluster 'k3s-default' created successfully!
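To compare the two cases further, the CoreDNS ConfigMap that k3d fails to patch can be dumped with plain kubectl (the context name comes from the logs above; in the broken builds the API server itself is unavailable, so this is only useful where the cluster is reachable, e.g. locally):
kubectl --context k3d-k3s-default -n kube-system get configmap coredns -o yaml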
Kubernetes 1.16 & 1.17 are broken in K3d/K3s in GitHub Actions and are not worth fixing (see nolar/setup-k3d-k3s#11).
So, we can fully drop them in Kopf, which, in turn, allows us to drop CRD v1beta1 support (in tests & CI).
Signed-off-by: Sergey Vasilyev <[email protected]>
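A side note on the CRD v1beta1 drop: apiextensions.k8s.io/v1 exists since Kubernetes 1.16 and v1beta1 is removed in 1.22, so clusters on 1.18+ can rely on v1 alone. A quick check with plain kubectl:
kubectl api-versions | grep apiextensions.k8s.io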
Environment:
GitHub Actions, Ubuntu 20.04.2.
K3d versions 4.4.6, 4.4.4, 4.4.3.