Image fetched from GCR container image registry #15352
So far it only applies to the Kubernetes images for the bootstrapper, i.e. the ones listed with … But I haven't seen any discussion about moving the distribution of the minikube images or the dashboard images (they still use docker.io)? That is, "k8s.gcr.io" (gcr.io/k8s-artifacts-prod) is separate from "gcr.io/k8s-minikube".

EDIT: the Kubernetes dashboard has been removed from the default preload.
Note that most minikube installations will pull the images from GCS and not from GCR, as part of the "preload":

386M	.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4

The duplicate "pause" is a bug, to be fixed with the cri-dockerd config.

This also includes the binaries, which otherwise would have been pulled from GCS (as part of the "kubernetes-release"):

https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm

They should really pull from …

So far it is mostly the "none" driver that misses the preload... It should be possible to do a cache tarball for it too, though?

8.7M	.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
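For reference, a preload tarball like the one above can be fetched and inspected by hand. This is a sketch only: the GCS bucket and path layout below are assumptions inferred from the cached filename, and may differ between minikube versions.

```shell
# Sketch: fetch a minikube preload tarball directly from GCS and list its
# contents. The bucket name and path scheme are assumptions -- verify them
# against your minikube version before relying on this.
VERSION="v1.25.3"
TARBALL="preloaded-images-k8s-v18-${VERSION}-docker-overlay2-amd64.tar.lz4"
curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/${VERSION}/${TARBALL}"

# Inspect the archive without extracting it to disk:
lz4 -d -c "${TARBALL}" | tar -t | head
```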
We may be able to reduce our spend on GCP by fetching the "preload" from elsewhere. I even wonder if something like a BitTorrent fetch could be an option. The trouble is: that's a lot of new code to consider.
Currently the download is optimized for speed and not for size, so it is much bigger than it could have been (with xz).
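The speed-versus-size tradeoff is easy to demonstrate. A minimal sketch, using gzip at its fastest setting as a stand-in for lz4 and xz at its densest setting (assumes gzip, xz, and GNU stat are available):

```shell
# Sketch: fast-but-large vs slow-but-small compression of the same input.
# gzip -1 stands in for lz4's speed-oriented setting; xz -9 is the
# size-oriented setting discussed above.
head -c 1048576 /dev/zero > sample.bin   # 1 MiB of highly compressible data
gzip -k -1 sample.bin                    # fast: produces sample.bin.gz
xz   -k -9 sample.bin                    # slow: produces sample.bin.xz
stat -c '%n %s' sample.bin.gz sample.bin.xz
```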
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules: …

You can: …

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/priority important-soon
/remove-lifecycle stale

I'm recommending this priority level because Kubernetes as a project is spending an excessive budget on container image pulls.
Using effective compression (zstd, zopfli, etc.) would also benefit Kubernetes.
Relevant to kubernetes/k8s.io#4738
Hi, I am using an older version of minikube which pulls the ingress addon image from k8s.gcr.io, a registry that has already been frozen by the Kubernetes community. Please suggest what changes I should make to my minikube cluster to avoid any future impact from this.
Unless the code can be changed (preferably), you can pull the image from registry.k8s.io and re-tag it as k8s.gcr.io locally. However, there should be a redirect in place.
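As a sketch of that workaround (the image name and tag below are illustrative examples, not necessarily what your minikube version's ingress addon uses -- substitute the actual image shown in your pod spec):

```shell
# Sketch: pull the image from registry.k8s.io, then re-tag it under the old
# k8s.gcr.io name so that existing manifests referencing k8s.gcr.io still
# resolve locally. Image name/tag are placeholders.
docker pull registry.k8s.io/ingress-nginx/controller:v1.2.1
docker tag  registry.k8s.io/ingress-nginx/controller:v1.2.1 \
            k8s.gcr.io/ingress-nginx/controller:v1.2.1
```

With the kicbase/docker driver you would run these inside the minikube node (e.g. via `minikube ssh`) rather than on the host.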
@afbjorklund, thank you for your response. Yes, it is currently redirecting to registry.k8s.io, but I am concerned that k8s.gcr.io will be taken down in the near future. Will upgrading to the most recent minikube version resolve this issue?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules: …

You can: …

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
What Happened?
All (?) container images should come from registry.k8s.io, per #14769. However, minikube is using GCR for its own images, such as gcr.io/k8s-minikube/storage-provisioner and gcr.io/k8s-minikube/kicbase.

/kind bug (I think, not sure)
Attach the log file
log.txt
Operating System
Ubuntu
Driver
KVM2