Migrate non-Kubernetes repos off of prow.k8s.io #12863
cc @nikhita for client-go/unofficial-docs
cc @Random-Liu for containerd/cri
This is not active and can be removed even right now. 👍
For Google-owned projects' owners: feel free to migrate to https://github.com/GoogleCloudPlatform/oss-test-infra instead.
@krzyzacy Do we have owners/contacts for the Google-owned projects?
Looking at the configs, we probably don't need to worry about cncf/apisnoop (a CNCF project, and only using meow/bark). We also may want to keep helm/charts around, as it is also a CNCF project and may provide valuable signal.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
@clarketm posted the following over in #16974 (comment): for each project we choose to migrate, a proposal to minimize risk of downtime is:
/priority important-soon
It is probably inadvisable to have k8s-ci-robot remain an org owner of github.com/cncf; IMHO, it already has the keys to a lot.
@BenTheElder removing all bot interactions other than trigger, to support the Kubernetes-focused prow jobs here: https://github.com/kubernetes/test-infra/pull/26418/files#r883159775
Currently @k8s-ci-robot only has admin perms on the repo. We could reduce the perms below admin, as we now only need trigger for the jobs, not the other bot interactions.
Why do we need trigger from the repo? I found some image-pushing jobs, but that also means that, more generally, this repo is dependent on SIG K8s Infra infrastructure for GCB + image hosting. It seems like you're saying this project is really a subproject of SIG Architecture; is there a reason it's not in kubernetes-sigs?
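For reference, limiting a repo to trigger-only bot interaction is a small plugins.yaml change. The sketch below assumes the current nested plugins.yaml schema, and example-org/example-repo is a placeholder rather than the actual entry under discussion:

```yaml
# plugins.yaml sketch: enable only the trigger plugin for a repo, so the bot
# can launch jobs via /test and /retest without the other chat-ops commands.
# "example-org/example-repo" is a placeholder, not a real configured repo.
plugins:
  example-org/example-repo:
    plugins:
      - trigger
```

With only trigger enabled, the bot mainly needs to read PRs, manage the ok-to-test label, and report commit statuses, which is considerably less than admin.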
@BenTheElder is https://github.com/GoogleCloudPlatform/k8s-cluster-bundle open to work on?
I'm not sure how much people who are not admins of the respective repos can work on this; you need admin access and support from the test-infra oncall (go.k8s.io/oncall) to handle the webhook transitions, unfortunately.
cc @aojea @mpherman2 @cjwagner @michelle192837 I think we should put a deadline on this and then just disable any remaining jobs after the deadline; they can be spun back up on a different prow later. I've filed bugs with all the remaining repos, which have lingered for approximately 1 year now... We can even call it something generous like EOY 2023, but this really needs to be finished so we can look at migrating prow.k8s.io to the community infrastructure now that we have the GCP run rate more reasonable.
+1 to a deadline, and I think we should be more aggressive with the deadline than EOY. Like 3 months tops.
I don't see prow.k8s.io migrating this year, as we're still figuring out how to run CI on AWS, and while the run rate is vastly improved, we overspent earlier this year, so I'm ambivalent about how aggressive the deadline is. We have, however, had open bugs about this for about a year with each remaining project, and this bug is nearly 4 years old. I think we also need to revisit the exceptions permitted: if we're permitting apisnoop and containerd, then you might argue for cadvisor as a node dependency.
+1 to documenting and having a process around exceptions.
Reached out to my best guess at current owners for cadvisor, rules_k8s, and k8s-cluster-bundle.
Part of:
- kubernetes#12863

Remove unused jobs

Signed-off-by: Arnaud Meukam <[email protected]>
Apisnoop is currently in transition to becoming a Kubernetes sub-project:
Will be finishing this shortly: https://groups.google.com/a/kubernetes.io/g/dev/c/p6PAML90ZOU. I think https://github.com/GoogleCloudPlatform/k8s-cluster-bundle is the main one remaining after apisnoop, unless we change our minds about closely related projects like cAdvisor and containerd.
#32089 should pretty much wrap this up.
google/cadvisor#3116 is probably the one follow-up at this point; will track there.
Currently, the Kubernetes community prow instance (prow.k8s.io) supports a number of non-Kubernetes project repositories. We need to migrate these off the Kubernetes community infrastructure ahead of moving our instance onto project-owned infrastructure.
We should also stop adding any new repos/orgs to the prow.k8s.io instance that aren't directly involved with the Kubernetes project.
Currently, I see the following repos/orgs with configuration in config.yaml/plugins.yaml (a minimal example of such an entry is sketched after the cc list below):

- https://github.com/containerd/cri
- https://github.com/containerd/containerd

cc: @kubernetes/k8s-infra-team @kubernetes/sig-testing @fejta @cjwagner @Katharine @krzyzacy @amwat @michelle192837
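As a minimal sketch of what such an entry looks like, a presubmit for one of these repos in Prow's config.yaml would be shaped roughly as follows; the job name, image, and test command are hypothetical placeholders, not the actual jobs:

```yaml
# config.yaml sketch: a hypothetical presubmit job for an external repo
# still running on prow.k8s.io. Job name, image, and command are
# placeholders for illustration, not the real configuration.
presubmits:
  containerd/cri:
    - name: pull-cri-unit-test   # hypothetical job name
      always_run: true           # run on every PR
      decorate: true             # use pod utilities for cloning and log upload
      spec:
        containers:
          - image: golang:1.20   # placeholder test image
            command:
              - go
              - test
              - ./...
```

Migrating a repo off prow.k8s.io means removing entries like these (plus the matching plugins.yaml stanza and the repo's webhook) and recreating them on the destination prow instance.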