
Image fetched from GCR container image registry #15352

Open
sftim opened this issue Nov 13, 2022 · 13 comments
Labels

  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

sftim commented Nov 13, 2022

What Happened?

All (?) container images should come from registry.k8s.io per #14769

However, Minikube is using GCR for its own images such as gcr.io/k8s-minikube/storage-provisioner and gcr.io/k8s-minikube/kicbase.

/kind bug
I think (not sure)

Attach the log file

log.txt

Operating System

Ubuntu

Driver

KVM2

k8s-ci-robot added the kind/bug label on Nov 13, 2022
afbjorklund (Collaborator) commented Nov 14, 2022

So far it only applies to the Kubernetes images for the bootstrapper, i.e. the ones listed with kubeadm config images list
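
For reference, that list looks roughly like this (a sketch for v1.25.3; the exact output depends on the kubeadm version and its configured defaults):

$ kubeadm config images list --kubernetes-version v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3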

But I haven't seen any discussion about moving the distribution of minikube images or dashboard images (they still use docker.io)?

i.e. "k8s.gcr.io" (gcr.io/k8s-artifacts-prod) is separate from "gcr.io/k8s-minikube"

EDIT: the Kubernetes dashboard has been removed from the default preload

afbjorklund (Collaborator) commented Nov 14, 2022

Note that most minikube installations will pull the images from GCS and not from GCR, as part of the "preload":

https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
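
A minimal sketch of fetching and inspecting that preload by hand (the lz4 + tar invocation is an assumption about the archive format, not minikube's actual code path):

$ curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
$ lz4 -dc preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 | tar -tv | head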

IMAGE                                     TAG                 IMAGE ID            SIZE
gcr.io/k8s-minikube/storage-provisioner   v5                  6e38f40d628db       31.5MB
k8s.gcr.io/pause                          3.6                 6270bb605e12e       683kB
registry.k8s.io/coredns/coredns           v1.9.3              5185b96f0becf       48.8MB
registry.k8s.io/etcd                      3.5.4-0             a8a176a5d5d69       300MB
registry.k8s.io/kube-apiserver            v1.25.3             0346dbd74bcb9       128MB
registry.k8s.io/kube-controller-manager   v1.25.3             6039992312758       117MB
registry.k8s.io/kube-proxy                v1.25.3             beaaf00edd38a       61.7MB
registry.k8s.io/kube-scheduler            v1.25.3             6d23ec0e8b87e       50.6MB
registry.k8s.io/pause                     3.8                 4873874c08efc       711kB

The duplicate "pause" is a bug, to be fixed with cri-dockerd config.
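
A sketch of that fix, assuming cri-dockerd's --pod-infra-container-image flag (which otherwise defaults to an older k8s.gcr.io pause image) is pointed at the same image kubeadm uses:

$ cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.8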

386M .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4

This also includes the binaries, which otherwise would have been pulled from GCS (as part of the "kubernetes-release"):

https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm
https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubelet
https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl

They should really pull from dl.k8s.io instead, if not using "preload":
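
A sketch of the equivalent dl.k8s.io downloads (assuming the same release path layout behind the redirector):

$ curl -fLO https://dl.k8s.io/release/v1.25.3/bin/linux/amd64/kubeadm
$ curl -fLO https://dl.k8s.io/release/v1.25.3/bin/linux/amd64/kubelet
$ curl -fLO https://dl.k8s.io/release/v1.25.3/bin/linux/amd64/kubectl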

So far it is mostly the "none" driver that misses out on the preload...

It should be possible to do a cache tarball for it too, though?

8.7M .minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
111M .minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
16M .minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3
17M .minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
20M .minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3
32M .minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3
332K .minikube/cache/images/amd64/registry.k8s.io/pause_3.8
35M .minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3
42M .minikube/cache/linux/amd64/v1.25.3/kubeadm
43M .minikube/cache/linux/amd64/v1.25.3/kubectl
109M .minikube/cache/linux/amd64/v1.25.3/kubelet

sftim (Author) commented Nov 15, 2022

We may be able to reduce our spend on GCP by fetching the "preload" from elsewhere. I even wonder if something like a BitTorrent fetch could be an option. The trouble is: that's a lot of new code to consider.

afbjorklund (Collaborator) commented:

Currently the download is optimized for speed rather than size, so it is much bigger than it could have been (with xz, for instance).
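
A rough way to compare the trade-off (a sketch; the filenames are illustrative, and xz is far slower to decompress than lz4):

$ lz4 -dc preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 > preload.tar
$ xz -T0 -9 --keep preload.tar   # smaller archive, slower (de)compression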

k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Feb 13, 2023
sftim (Author) commented Feb 13, 2023

/priority important-soon

/remove-lifecycle stale

I'm recommending this priority level because Kubernetes as a project is consuming excessive budget for container image pulls.

k8s-ci-robot added the priority/important-soon label and removed the lifecycle/stale label on Feb 13, 2023
sftim (Author) commented Feb 13, 2023

Using effective compression (zstd, zopfli, etc.) also benefits Kubernetes.
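
For instance, with zstd (a sketch; the input filename is illustrative, and the level/window flags are just one reasonable choice that trades a little ratio for much faster decompression than xz):

$ zstd -T0 -19 --long=27 preload.tar -o preload.tar.zst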

sftim (Author) commented Feb 13, 2023

Relevant to kubernetes/k8s.io#4738

AggRag commented Apr 18, 2023

Hi, I am using an older version of minikube that pulls the ingress add-on image from k8s.gcr.io, which has already been frozen by the Kubernetes community. Please suggest what changes I should make to my minikube cluster to avoid any future impact from this.

afbjorklund (Collaborator) commented Apr 18, 2023

Unless the code can be changed (preferably), you can pull the image from registry.k8s.io and re-tag it as k8s.gcr.io locally.
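
For example (a sketch; the exact image name and tag are illustrative):

$ docker pull registry.k8s.io/ingress-nginx/controller:v1.2.1
$ docker tag registry.k8s.io/ingress-nginx/controller:v1.2.1 k8s.gcr.io/ingress-nginx/controller:v1.2.1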

However, there should be a redirect in place.

AggRag commented Apr 18, 2023

@afbjorklund, thank you for your response. Yes, it is currently redirecting to registry.k8s.io, but I am concerned that k8s.gcr.io will be taken down in the near future. Will upgrading to the most recent minikube version resolve this issue?

k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jul 17, 2023
k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 19, 2024