
Registries used in k/k CI should be on Kubernetes Community infra #1458

Open
4 of 7 tasks
justaugustus opened this issue Dec 2, 2020 · 28 comments
Labels

  • area/artifacts: Issues or PRs related to the hosting of release artifacts for subprojects
  • area/release-eng: Issues or PRs related to the Release Engineering subproject
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • sig/k8s-infra: Categorizes an issue or PR as relevant to SIG K8s Infra.
  • sig/release: Categorizes an issue or PR as relevant to SIG Release.
  • sig/testing: Categorizes an issue or PR as relevant to SIG Testing.

Comments

justaugustus (Member) commented Dec 2, 2020

ref: @spiffxp at kubernetes/kubernetes#97002 (comment):

https://kubernetes.slack.com/archives/C09QZ4DQB/p1606896985218000

The project hosting the GCR repo was swept up by a security audit because it hadn't been properly accounted for. That change has been reverted. Now waiting to see affected jobs go back to green.

We should create a community-owned equivalent project; I'll open a follow-up issue for that.

gcr.io/k8s-authenticated-test is the specific case here, but we should try to cover as many as we find.

/assign
/sig release
/area release-eng artifacts
cc: @kubernetes/release-engineering @kubernetes/ci-signal @thockin

Tagging since they also expressed interest:
@thejoycekung @jeremyrickard @aojea

EDIT(@spiffxp): Broke down into issues per GCP project

And not a GCP project, but

@k8s-ci-robot k8s-ci-robot added sig/release Categorizes an issue or PR as relevant to SIG Release. area/release-eng Issues or PRs related to the Release Engineering subproject area/artifacts Issues or PRs related to the hosting of release artifacts for subprojects labels Dec 2, 2020
@justaugustus justaugustus changed the title Registries for used in k/k CI should be on Kubernetes Community Registries for used in k/k CI should be on Kubernetes Community infra Dec 2, 2020
@justaugustus justaugustus changed the title Registries for used in k/k CI should be on Kubernetes Community infra Registries used in k/k CI should be on Kubernetes Community infra Dec 2, 2020
spiffxp (Member) commented Dec 3, 2020

/sig testing

spiffxp (Member) commented Jan 21, 2021

/milestone v1.21
This covers a subset of projects in umbrella issue #1469

@k8s-ci-robot k8s-ci-robot added this to the v1.21 milestone Jan 21, 2021
@spiffxp spiffxp added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 22, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2021
ameukam (Member) commented Apr 22, 2021

/remove-lifecycle stale
/milestone clear

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 2, 2022
ameukam (Member) commented Aug 2, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 2, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 31, 2022
ameukam (Member) commented Oct 31, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 31, 2022
@riaankleinhans riaankleinhans moved this to Migrate away issues in registry.k8s.io (SIG K8S Infra) Jan 4, 2023
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2023
riaankleinhans (Contributor) commented

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2023
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2023
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 29, 2023
ameukam (Member) commented Jun 24, 2023

/remove-lifecycle rotten
/lifecycle frozen
/milestone clear

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 24, 2023
@k8s-ci-robot k8s-ci-robot removed this from the v1.25 milestone Jun 24, 2023
@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jun 24, 2023
@ameukam ameukam moved this to Backlog in SIG K8S Infra Dec 18, 2024
BenTheElder (Member) commented

gcr.io/k8s-authenticated-test is the specific case here, but we should try to cover as many as we find.

This and the other authenticated test repo are pretty much the worst, and an overall antipattern.

gke-release: #1525

At this point this is basically only used to set up GPUs on GCE. The problem with exiting this one is thorny licensing; we're not depending on it for test cases, release artifacts, etc., just for setting up GPUs in kube-up.sh.

We could probably figure it out, but I'm not sure it's worth it. We're going to be dependent on a third party for GPU setup for the foreseeable future, and we only have about one job using this.

The authenticated tests are not great, and I'm leaning towards just deleting them, given the lack of response on sorting out a sustainable alternative (which would require input from the owners of the code under test).

Hosting a public endpoint with a hardcoded login / service account key in the test binaries was a terrible hack and not something we should replicate or continue to depend on, regardless of who owns it. If someone wanted to rebuild these tests, they could create an in-cluster registry with auth or something (running on hostnet?).

see also: kubernetes/kubernetes#97026, kubernetes/kubernetes#113925
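The in-cluster alternative mentioned above could look roughly like the following. This is only a sketch under assumptions: the pod/secret names, the throwaway credentials, and the hostNetwork choice are all hypothetical, not anything the project ships; the `REGISTRY_AUTH*` environment variables are the standard htpasswd auth settings of the `registry:2` image.

```shell
# Hypothetical sketch: generate a manifest for an in-cluster authenticated
# registry, instead of depending on an externally owned authenticated repo.
set -eu

# Credentials would be bcrypt htpasswd entries in a Secret, e.g. (needs apache2-utils):
#   htpasswd -Bbn testuser testpassword > htpasswd
cat > /tmp/registry-auth-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: authenticated-registry
spec:
  hostNetwork: true            # "running on hostnet?" per the comment above
  containers:
  - name: registry
    image: registry:2
    env:
    - name: REGISTRY_AUTH
      value: htpasswd
    - name: REGISTRY_AUTH_HTPASSWD_REALM
      value: test-registry
    - name: REGISTRY_AUTH_HTPASSWD_PATH
      value: /auth/htpasswd
    volumeMounts:
    - name: auth
      mountPath: /auth
  volumes:
  - name: auth
    secret:
      secretName: registry-htpasswd   # hypothetical Secret holding the htpasswd file
EOF
echo "wrote /tmp/registry-auth-pod.yaml"
```

A test fixture like this keeps the credentials scoped to the test cluster's lifetime, which avoids the hardcoded-public-endpoint problem described above.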

Projects
Status: Backlog
Development

No branches or pull requests

10 participants