K8s 1.23.14 (default) deployment fails #321
Comments
Thanks, @chess-knight!
I agree we should support lower versions, and I think that patching is the better option in this case, but if we want to patch the registry location, we should do it correctly. We can use e.g. the cluster-api logic for that. I also found that this PR adds "critical" Kubernetes patch versions as defaults, so at least it will not fail by default as in this case.
So we copy the capi knowledge of whether kube-proxy is at the old location (k8s.gcr.io) or at the new one (registry.k8s.io). As the logic became somewhat involved, it was split out into a separate script that can be tested more easily. Signed-off-by: Kurt Garloff <[email protected]>
Should be fixed by #324.
* Address #321: Fine-grained repo location patching. We copy the capi knowledge of whether kube-proxy is at the old location (k8s.gcr.io) or at the new one (registry.k8s.io). As the logic became somewhat involved, it was split out into a separate script that can be tested more easily.
* Fix parsing of v1.xx (without a patchlevel).
* Make k8s version parsing more robust, improve comments.
* We might have v2.1.y (with a single-digit minor version) some day, so make version parsing robust against it. Thanks @joshmue for pointing this out.
* More images, not just kube-proxy, might be affected. Thanks @chess-knight.
* The registry location fixup is needed for new k8s (~Nov 2022), not >1.21 (thanks again @chess-knight).

Signed-off-by: Kurt Garloff <[email protected]>

Addressed by the merged #324.
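To illustrate the parsing points above (missing patchlevel, possible single-digit minor version), here is a minimal shell sketch. It is not the project's actual script; the function and variable names are made up for illustration only.

```bash
#!/bin/bash
# Hypothetical sketch of k8s version parsing; illustrates the PR bullet points,
# not the real implementation in the repository.

# Split "v1.23.14", "1.23.14" or "v1.23" into MAJOR, MINOR, PATCH.
parse_k8s_version() {
    local ver="${1#v}"                 # strip an optional leading "v"
    MAJOR="${ver%%.*}"                 # everything before the first dot
    local rest="${ver#*.}"
    MINOR="${rest%%.*}"                # works for 1- or 2-digit minor versions
    if [[ "$rest" == *.* ]]; then
        PATCH="${rest#*.}"
    else
        PATCH=0                        # "v1.23" given without a patchlevel
    fi
}

parse_k8s_version "v1.23.14" && echo "$MAJOR $MINOR $PATCH"   # -> 1 23 14
parse_k8s_version "v1.23"    && echo "$MAJOR $MINOR $PATCH"   # -> 1 23 0
parse_k8s_version "v2.1.3"   && echo "$MAJOR $MINOR $PATCH"   # -> 2 1 3
```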
Similar problem to #303.
Deployment gets stuck during bootstrap of the first control plane because the coredns container image cannot be pulled.
Example log of containerd on the control plane:
I tested it and it seems that registry.k8s.io is used only from v1.23.15.
This also applies to v1.22.17 and v1.24.9.
PR #313 only checks major and minor versions, which seems insufficient.
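As a sketch of why the patch level matters, the check would have to look something like the following. The cut-over releases (v1.22.17, v1.23.15, v1.24.9) are taken from the comment above; the function name and structure are hypothetical and not taken from PR #313 or #324.

```bash
#!/bin/bash
# Illustrative only: pick the image registry based on the full
# major.minor.patch version. kubeadm defaults to registry.k8s.io from v1.25
# and in the backport patch releases mentioned above.

registry_for_version() {
    local ver="${1#v}" major minor patch
    IFS=. read -r major minor patch <<< "$ver"
    patch="${patch:-0}"
    if (( major > 1 || minor >= 25 )); then
        echo "registry.k8s.io"
    elif (( minor == 24 && patch >= 9 )) || \
         (( minor == 23 && patch >= 15 )) || \
         (( minor == 22 && patch >= 17 )); then
        echo "registry.k8s.io"
    else
        echo "k8s.gcr.io"
    fi
}

registry_for_version "v1.23.14"   # -> k8s.gcr.io (the failing default here)
registry_for_version "v1.23.15"   # -> registry.k8s.io
```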
I found that it is fixed in cluster-api v1.2.9, so maybe we do not need this registry patching logic at all. Maybe we also don't need `imageRepository` in the KubeadmControlPlane spec. This issue in cluster-api is probably also related.