
Replace k8s.gcr.io with registry.k8s.io #15777

Closed
wants to merge 1 commit

Conversation

mrbobbytables
Member

k8s.gcr.io is in the process of being deprecated, and future images will no longer be served from that endpoint.

This is a general "find & replace"; I ignored the changelog but updated everything else.

For more information - see this blog post: https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mrbobbytables
Once this PR has been reviewed and has the lgtm label, please assign prezha for approval by writing /assign @prezha in a comment. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 2, 2023
@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 2, 2023
@minikube-bot
Collaborator

Can one of the admins verify this patch?

@afbjorklund
Collaborator

afbjorklund commented Feb 3, 2023

@mrbobbytables: the goal with the code changes was to have the minikube cache/preload match kubeadm.

This is why there was a cut-off, so that we don't have to invalidate old caches and old preloads needlessly.

So it is good that the examples and tests are updated, but not changing e.g. 1.20 was done on purpose...

Versions up to 1.21 use the old registry, versions 1.25+ use the new one. The in-between minors have their cutoff points: 1.22.18, 1.23.15, 1.24.9

cache/images/k8s.gcr.io/kube-addon-manager_v9.0
cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
cache/images/k8s.gcr.io/kube-proxy_v1.14.0
cache/images/registry.k8s.io/k8s-dns-sidecar-amd64_1.14.13
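The cutoff scheme above can be sketched as follows (a hypothetical illustration, not minikube's or kubeadm's actual code; the function name and version parsing are assumptions, and the cutoff patches are the ones listed above):

```python
# Hypothetical sketch of the registry cutoffs described above, NOT the real
# implementation: versions <= 1.21 stay on the old registry, 1.25+ use the
# new one, and 1.22-1.24 switch at their backport patch releases.
def default_registry(version: str) -> str:
    major, minor, patch = (int(x) for x in version.lstrip("v").split("."))
    first_new_patch = {22: 18, 23: 15, 24: 9}  # minor -> first patch on registry.k8s.io
    if minor <= 21:
        return "k8s.gcr.io"
    if minor >= 25:
        return "registry.k8s.io"
    return "registry.k8s.io" if patch >= first_new_patch[minor] else "k8s.gcr.io"

print(default_registry("v1.20.0"))   # k8s.gcr.io
print(default_registry("v1.22.18"))  # registry.k8s.io
print(default_registry("v1.24.8"))   # k8s.gcr.io
```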
Collaborator

The Kubernetes version needs to be updated in this example (not changing anything for 1.14).

@@ -294,7 +294,7 @@ Steps:
asserts basic "service" command functionality

Steps:
- Create a new `k8s.gcr.io/echoserver` deployment
- Create a new `registry.k8s.io/echoserver` deployment
Collaborator

The echoserver no longer lives in the main registry; it has been forked to multiple locations.

Collaborator

For minikube, I think k8s.gcr.io/echoserver:1.4 was replaced with kicbase/echo-server:1.0

@@ -76,7 +76,7 @@ type ClusterConfig struct {
KubernetesConfig KubernetesConfig
Nodes []Node
Addons map[string]bool
CustomAddonImages map[string]string // Maps image names to the image to use for addons. e.g. Dashboard -> k8s.gcr.io/echoserver:1.4 makes dashboard addon use echoserver for its Dashboard deployment.
CustomAddonImages map[string]string // Maps image names to the image to use for addons. e.g. Dashboard -> registry.k8s.io/echoserver:1.4 makes dashboard addon use echoserver for its Dashboard deployment.
Collaborator

The echoserver image needs to be updated to the new one.

@@ -21,7 +21,7 @@ import (
)

// OldDefaultKubernetesRepo is the old default Kubernetes repository
const OldDefaultKubernetesRepo = "k8s.gcr.io"
const OldDefaultKubernetesRepo = "registry.k8s.io"
Collaborator

Changing the old registry to be the same as the new one makes this rather meaningless.

Member Author

Ah yeah - sorry, this was a more generic find & replace. Will update.

"k8s.gcr.io/coredns:1.6.2",
"k8s.gcr.io/etcd:3.3.15-0",
"k8s.gcr.io/pause:3.1",
"registry.k8s.io/kube-proxy:v1.16.0",
Collaborator

Not changing anything for 1.16?

Member Author

I must have accidentally replaced just that one line instead of all of them in the file; will update.

@afbjorklund
Collaborator

The old echoserver needs to be updated to the new location.

@spowelljr I think this will still use the "kicbase" echoserver ?

@mrbobbytables
Member Author

@afbjorklund should I just close this? It looks like things are being tackled / tracked separately in #14769

FWIW - there's a policy discussion going on about aging out old images; we're sadly still on track to exceed our 3M in GCP credits (to the tune of an additional 1M 😬) and are looking for ways to reduce costs.

Outside of policy decisions on removing old images, one of the big things we can do is get as many user-facing things as possible to switch over to registry.k8s.io, which will spread the load across multiple providers. Updating k/website and this repo looked like an easy win because they are so frequently used / referenced by end users.

@afbjorklund
Collaborator

afbjorklund commented Feb 6, 2023

A little reminder is a good thing; it missed the minikube 1.29.0 release, for instance.

I think the best approach for minikube would be to make a "preload" version also for the "none" driver.
Instead of tarring up the entire container storage (which doesn't work), it could include all the images?

428M	preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4
429M	preloaded-images-k8s-v18-v1.26.1-cri-o-overlay-amd64.tar.lz4
398M	preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4

Currently it is pulling them one by one from the registry and storing them in the cache (without a preload).

But when there is a combined tarball, that could be used instead - and also for the "none" driver.
The only thing was that Podman was buggy with multiple images in one archive, but I think it is fixed...

Basically: kubeadm config images list | xargs docker save

@mrbobbytables
Member Author

I'll close this for now and move things over to separate PRs, and track in #14769

/close

@k8s-ci-robot
Contributor

@mrbobbytables: Closed this PR.

In response to this:

I'll close this for now and move things over to separate PRs, and track in #14769

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@afbjorklund
Collaborator

afbjorklund commented Feb 6, 2023

@mrbobbytables if there is any other discussion on how to make the downloads smaller, please invite

I don't know if the preload bucket and release bucket are also in the same spreadsheet, somewhere...

https://storage.googleapis.com/minikube-preloaded-volume-tarballs/

https://storage.googleapis.com/kubernetes-release/

@mrbobbytables
Member Author

I don't know of any discussion on making the images smaller at this time; most of the discussion is around shifting traffic to other providers or aging out old images. I don't know if image size would impact much of the traffic right now - from various sources it looks like most users are running older images, and that's what drove the backporting discussion / push to get everyone to use registry.k8s.io.

@afbjorklund
Collaborator

Ok, for minikube it is mostly these two:

  1. Downloading all the images into one file, instead of one-by-one with crane like now.
    When not using a daemon, there is nobody to notice that the images are sharing layers.
111M	images/amd64/registry.k8s.io/etcd_3.5.6-0
17M	images/amd64/registry.k8s.io/kube-scheduler_v1.26.1
17M	images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
34M	images/amd64/registry.k8s.io/kube-apiserver_v1.26.1
21M	images/amd64/registry.k8s.io/kube-proxy_v1.26.1
344K	images/amd64/registry.k8s.io/pause_3.9
31M	images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1
230M	total

145M images/amd64/v1.26.1.tar.xz
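A toy calculation (made-up layer names and sizes, loosely based on the listing above) illustrates why per-image archives waste space compared to one combined file:

```python
# Toy illustration of the layer-sharing point: separate per-image archives
# each carry their own copy of shared layers, while one combined archive
# stores each layer only once. Layer names and MB sizes are made up.
images = {
    "kube-apiserver": {"base-os": 30, "apiserver-bin": 34},
    "kube-scheduler": {"base-os": 30, "scheduler-bin": 17},
    "kube-proxy":     {"base-os": 30, "proxy-bin": 21},
}

# One archive per image: the shared "base-os" layer is stored three times.
one_by_one = sum(sum(layers.values()) for layers in images.values())

# One combined archive: deduplicate layers by name before summing.
unique_layers = {name: size for layers in images.values() for name, size in layers.items()}
combined = sum(unique_layers.values())

print(one_by_one, combined)  # 162 vs 102: the shared base layer is stored once
```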

  2. Downloading a compressed version of the binaries, instead of getting the executables.
    Currently compression (xz) is only offered for the packages (deb, rpm) but not the exe.
116M	linux/amd64/v1.26.1/kubelet
46M	linux/amd64/v1.26.1/kubectl
45M	linux/amd64/v1.26.1/kubeadm
207M	total

39M linux/amd64/v1.26.1.tar.xz

But the URLs should be changed, as well.
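As a rough illustration of the second point, Python's stdlib lzma module emits the same xz format; the stand-in data here is artificially redundant, but large static binaries also compress very well, as the 207M vs 39M figures above suggest:

```python
import lzma

# Rough illustration of the xz point above: lzma (the xz container format)
# shrinks redundant data dramatically. This stand-in data is far more
# redundant than a real kubelet binary, so the ratio here is exaggerated.
data = b"static binary contents " * 100_000  # ~2.3 MB of repetitive stand-in data
compressed = lzma.compress(data)

# The compressed size is a tiny fraction of the original.
print(len(data), len(compressed), len(compressed) < len(data) // 10)
```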

@BenTheElder
Member

Versions -1.21 use old, versions 1.25+ use new. The others have their cutoff points, 1.22.18, 1.23.15, 1.24.9

But it would only invalidate old caches? How long will minikube continue to support these old versions?
The reason it only goes back so far in Kubernetes is we can no longer release older versions.

https://storage.googleapis.com/minikube-preloaded-volume-tarballs/

I have no idea where this is and who pays for it ... 😬

https://storage.googleapis.com/kubernetes-release/

This is billed to Google internally / a different bill than SIG K8s Infra. There's other work ongoing to migrate that out to full community control, but it's a lot, so we're seeking out other funding before doing so...

@mrbobbytables if there is any other discussion on how to make the downloads smaller, please invite

We've had pretty active work targeting images like kube-proxy but nothing at this time. distroless kube-proxy shipped a pretty major reduction in 1.25 kubernetes/kubernetes#109406

I think out of the top images, for current tags only ingress-nginx/controller jumps out as something that appears to be particularly large.

@afbjorklund
Collaborator

afbjorklund commented Mar 28, 2023

Versions -1.21 use old, versions 1.25+ use new. The others have their cutoff points, 1.22.18, 1.23.15, 1.24.9

But it would only invalidate old caches? How long will minikube continue to support these old versions? The reason it only goes back so far in Kubernetes is we can no longer release older versions.

This was just about the default in kubeadm; the plan is to use registry.k8s.io for everything.

I don't see why the version support should be different in minikube from the rest of the project...
But as long as the tutorials* are running 1.20, I guess some leeway can be "allowed" and tolerated?

* the old tutorials will soon be deleted anyway

https://storage.googleapis.com/minikube-preloaded-volume-tarballs/

I have no idea where this is and who pays for it ... 😬

The preload is an optional pre-installed archive, so I guess it is the same as the kind "node" image.

It would be possible to not provide this feature anymore, and use the regular container images.
Maybe generate the preload on the client, but then it would only speed up the second installation?

@mrbobbytables if there is any other discussion on how to make the downloads smaller, please invite

We've had pretty active work targeting images like kube-proxy but nothing at this time. distroless kube-proxy shipped a pretty major reduction in 1.25 kubernetes/kubernetes#109406

I think out of the top images, for current tags only ingress-nginx/controller jumps out as something that appears to be particularly large.

The preload is all about making the install go faster, so it is actually bigger than the usual images (it uses lz4).

But it would be possible to make a "batteries included" version and optimize for a small download... (using xz)

Otherwise I think we are hoping that improvements in the container runtimes and registries will "fix" the problem.
Like allowing for peer-to-peer pulling, and so on? Having local mirrors of the official registry also helps a lot.

5 participants