Describe and write down the CI policy #9
Comments
Will close because it is a duplicate of kubernetes/test-infra#18551. /close
@cpanato: Closing this issue.
Reopening this to evaluate how the current release-blocking and release-informing policy compares with what's proposed in kubernetes/test-infra#18599. /reopen
@alejandrox1: Reopened this issue.
I would recommend we strive to make the criteria something that can be enforced via tests or automation. I still think the final decision should come down to humans, but it's not clear to me how often people really check adherence to these criteria. Taking a look at the release-blocking criteria:

- This used to be charted;
- If every job is a prowjob, we could statically check the job configs that use
- We don't measure this currently; is it possible for us to do so? Or should we use some other measure?
- Ownership is enforced via static checks against the testgrid config. As for "that is responsive," though, I'm not sure how we measure that.
- This used to be charted; I think testgrid's summary page shows how many out of 10 recent columns passed, but I'm not sure whether we track this over time (see the sketch below).
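For illustration only, here is a minimal sketch of the kind of "N of the last 10 runs passed" measurement mentioned above. It is not testgrid's actual summarizer; the `passRate` helper and the sample results are hypothetical.

```go
// A minimal sketch of measuring "how many of the 10 most recent runs passed".
// passRate and the sample data are illustrative, not testgrid's real code.
package main

import "fmt"

// passRate returns the fraction of passing results among the most recent n runs.
func passRate(runs []bool, n int) float64 {
	if n > len(runs) {
		n = len(runs)
	}
	if n == 0 {
		return 0
	}
	passed := 0
	for _, ok := range runs[len(runs)-n:] {
		if ok {
			passed++
		}
	}
	return float64(passed) / float64(n)
}

func main() {
	// Hypothetical recent results for a release-blocking job, newest last.
	recent := []bool{true, true, false, true, true, true, true, false, true, true}
	fmt.Printf("passed %d%% of the last 10 runs\n", int(passRate(recent, 10)*100))
}
```

Tracking this value over time (rather than only the current window) would be what lets us check adherence to a pass-rate criterion retroactively.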
/remove-lifecycle stale
Added this to my backlog for next week.
Took the liberty of renaming the issue so it visually matches the others.
/remove-lifecycle stale
I would like to move this over to kubernetes/sig-testing with the intent of tackling it in v1.22. Any objections?
Sounds good! Thank you for catching up with this 🙏
/sig testing
@k8s-triage-robot: Closing this issue.
/reopen
@spiffxp: Reopened this issue.
/sig testing
@k8s-triage-robot: Closing this issue.
Describe the CI policy for jobs, like:
We have a policy for blocking and informing jobs: https://github.com/kubernetes/sig-release/blob/master/release-blocking-jobs.md.
If we compare this policy with what is proposed in kubernetes/test-infra#18599, what would we add? What would we change?
We should evaluate what changes we need to make to help ensure we are acting on useful information and to check that CI jobs are maintained and healthy.
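As a rough illustration of the "enforce the policy via automation" idea from the discussion above, here is a sketch of a static check over a Prow job config file. It assumes a YAML file with a top-level `periodics` list and that charting/ownership are expressed through `testgrid-dashboards` and `testgrid-alert-email` annotations; the structs are a simplified, assumed subset for illustration, not the real Prow schema or the actual test-infra checks.

```go
// A sketch of a static policy check over a Prow job config file.
// The structs model only a tiny, assumed subset of the real config.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type job struct {
	Name        string            `yaml:"name"`
	Annotations map[string]string `yaml:"annotations"`
}

type config struct {
	Periodics []job `yaml:"periodics"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkjobs <config.yaml>")
		os.Exit(2)
	}
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg config
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	failed := false
	for _, j := range cfg.Periodics {
		// Assumed policy: every job must declare where it is charted and who owns it.
		if j.Annotations["testgrid-dashboards"] == "" || j.Annotations["testgrid-alert-email"] == "" {
			fmt.Printf("FAIL %s: missing testgrid-dashboards or testgrid-alert-email annotation\n", j.Name)
			failed = true
		}
	}
	if failed {
		os.Exit(1)
	}
}
```

A check like this could run in presubmit so that policy violations are caught when job configs change, rather than relying on periodic human review.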
/area release-eng
/area ci
/kind documentation
/priority important-soon
/milestone v1.20