Freeze the old k8s.gcr.io image registry #5035
Conversation
@upodroid: GitHub didn't allow me to request PR reviews from the following users: kubernetes/release-engineering. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Not sure I understand the question, but let's add the new AR regions in a different PR, probably after the redirect is done; the regional-equivalent buckets for those new AR repositories are still not created. I also suggest we merge this PR on the morning of April 3rd CET.
Sounds good.
/hold till the 3rd of April
Correct. I meant regions instead of rules.
Force-pushed from 43f8f2f to d8ec07c
Force-pushed from c23a9b6 to 20efbb7
See:
Force-pushed from 20efbb7 to a3af621
/hold We're currently in the process of fully redirecting some images, including tag lists, so actually enacting the freeze would put us in a lose-lose situation if we find any issues. This is due to the skew issue with the backing stores. Will start a Slack thread for higher bandwidth in a moment.
@puerco Did we manage to fix the missing sigstore layers in the backing registries?
@upodroid We merged some improvements to the image promoter that should allow it to run within the AR rate limit, which should stop the problem from getting bigger. The missing signatures are still missing. We haven't run the remediation jobs yet, as we agreed to wait until the redirect rollout is done, or at least green-lighted once there is enough confidence that the redirect is still within its error budget.
Ready on this yet?
Yes, yesterday we lifted kubernetes/registry.k8s.io#181; however, the ROI is still lower than pre-redirect, and many folks are away at KubeCon, so it might not be the best week for this. I do still think we should do this.
We're now post-KubeCon.
/approve
/lgtm
Force-pushed from a3af621 to 1580a96
I rebased the PR and picked up the latest set of registries. This is ready now.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: BenTheElder, thockin, upodroid The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Fixes: kubernetes/release#2947
Fixes: kubernetes/enhancements#3720
Merging this PR will freeze the old registry.
/cc @kubernetes/release-engineering @dims @BenTheElder
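For downstream users, the practical effect of the freeze is that new images and tags will only be published to registry.k8s.io, so existing references to the old registry should be rewritten. A minimal sketch (the manifest line and `sed` invocation below are illustrative, not taken from this PR):

```shell
# Hypothetical manifest line still pointing at the frozen registry.
old="image: k8s.gcr.io/pause:3.9"

# Rewrite the host to the new registry.k8s.io endpoint.
new=$(echo "$old" | sed 's#k8s.gcr.io#registry.k8s.io#')

echo "$new"
# → image: registry.k8s.io/pause:3.9
```

In practice the same substitution would be applied across manifests, Helm values, and kubelet configuration wherever k8s.gcr.io appears.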