create qualification tests for images in staging #144
/assign rajibmitra |
For me it’s ok :)
I just want to follow the work also (as a curious person!)
On Tue, Oct 29, 2019 at 07:28, Rajib Mitra <[email protected]> wrote:
… I would like to work on the prow job if it's ok @listx @rikatz
|
Sounds good to me. Please write a design doc of some sort to sketch out your thoughts first before pushing up an implementation. You can then share it with https://groups.google.com/forum/#!newtopic/kubernetes-wg-k8s-infra |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
@rajibmitra do you need something (any help, insights) to write that document? :) |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/reopen

The work that @yodahekinsew is doing touches on this area, as he is expanding the variety of tests run against staging images. See https://docs.google.com/document/d/1D5FEU58j3_V8Fxl98I3vcqNCxzpD9cV9Gtpxf9wGO4I/edit#heading=h.4bti0lhcgu9m. |
@listx: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/reopen

Intro

OK, so we got bitten by the lack of QA tests for staging images with kubernetes/test-infra#23148 (reverted in kubernetes/test-infra#23151). There are several things going on, but let me capture what's happening today. You can think of this comment as a mini-postmortem doc.

Current state of affairs

Step 1: Code changes to CIP repo

We merge PRs into this repo to update the …

Step 2: Promotion from staging to prod

Later, when the powers that be deem that the image should be promoted to production, we create a PR to "promote the promoter", like this. When the PR is merged, the staging image is promoted to prod (via post-k8sio-image-promo).

Step 3: Update of Prow Job configuration

Finally, we can now also update existing Prow Jobs that rely on the promoter to use this new production version. A PR to update the Prow Jobs is created to bump the image version used, like this.

The fire/problem

Now consider kubernetes/test-infra#23148. Alas, it needed to be reverted. The problem (captured in the logs) was that the CLI flag handling logic changed between the old version used prior to this PR (version …) and the new one.

See https://kubernetes.slack.com/archives/CJH2GBF7Y/p1628202615135000 for initial thoughts on this problem.

Proposed changes

So how do we prevent this problem in the future? Apart from a more formalized process for designating new promoter images for production (involving more review from the Release Engineering group), there are additional changes we can make:
I think we should do all 3 of the above, in the order they are listed. But how can we maximize code reuse across all 3 changes? It would be painful to have 3 separate sets of tests, requiring us to update each of these separately. I assume changes 2 and 3 can somehow share code, but I'm not sure about the first. |
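[Editor's sketch, not part of the original thread.] To make the idea of qualification tests for staging images concrete, below is a minimal container-structure-test config that fails if the image's CLI surface changes out from under the Prow jobs that consume it — the class of break described in the mini-postmortem above. The entrypoint name and flag names are placeholders, not a statement of the promoter's actual CLI.

```yaml
# Hypothetical qualification config for a staging promoter image; run with:
#   container-structure-test test --image <staging-image> --config cli-flags.yaml
schemaVersion: 2.0.0

commandTests:
  # Guard against the kind of CLI change that forced the revert above: fail
  # qualification if the usage text no longer mentions the flags that the
  # existing Prow jobs pass to the binary.
  - name: "usage text still lists the flags Prow jobs rely on"
    command: "cip"            # placeholder entrypoint name
    args: ["-h"]
    expectedOutput:           # use expectedError instead if the usage text goes
      - ".*-manifest.*"       # to stderr, as Go's flag package does by default
      - ".*-dry-run.*"        # placeholder flag names, not the real CLI surface
```

A check like this would be cheap enough to run on every staging image, and its failure would surface before a "promote the promoter" PR is even opened. |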
/assign @listx |
So it's in my queue to help with:

…

FYI @kubernetes-sigs/release-engineering on #144 (comment). |
@justaugustus I think I can help with adding a unit test in Prow (proposed change 1). I'm going to be working more in Prow anyway so it's a good exercise. |
Two questions:

…

Thinking out loud:

…

In addition, we may want to reference this API in the … |
Ideally yes; I'll try to keep this thread synced with any new issues related to this area.
Yeah we could use some sort of standardized API definition for CIP and related tools. Maybe that "source of truth" will be in the form of a YAML configuration defined for https://github.com/GoogleContainerTools/container-structure-test. (Or is just rolling our own "CLI API" YAML definition better? IDK)
Not sure what you mean by core vs extra features, but yeah if we just have some piece of YAML that defines what the API should be, we can run with it for READMEs, docs, changelogs, etc. |
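[Editor's sketch, not part of the original thread.] For the "roll our own CLI API YAML definition" option mentioned above, here is one rough idea of what such a source of truth could look like. Everything in it — the file name, schema, flag names, defaults, and descriptions — is hypothetical; it only illustrates how a single definition could feed READMEs, docs, changelogs, and the qualification tests.

```yaml
# cli-api.yaml (hypothetical): single source of truth for the promoter's CLI surface.
# Docs generators and qualification tests would both consume this file, so a flag
# removal or rename shows up as a reviewable diff here first.
binary: cip                     # placeholder binary name
version: 0.1.0                  # version of this API definition, not of the binary
flags:
  - name: manifest              # placeholder flags; not the promoter's real CLI
    type: string
    required: true
    description: Path to the promotion manifest.
  - name: dry-run
    type: bool
    default: true
    description: Print the promotion plan without pushing any images.
```

A qualification test could then diff the flags declared here against the flags the staging image actually accepts, rather than hard-coding the expected flag list in several places. |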
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close |
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
As noted in kubernetes/k8s.io#406 (review), the staging image up for promotion should have a link to it being tested (maybe some more thorough e2e tests going through various flags, etc). This could be a Prow postsubmit job. That job could run against the master branch, and for every update to it: …

Then the logs for this job itself could be linked-to for every promotion PR promoting this newly-tested version to prod.
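[Editor's sketch, not part of the original issue.] A minimal outline of what such a postsubmit could look like, assuming the usual Prow job config layout; the repo path, job name, staging image reference, and test entrypoint below are all placeholders rather than the real values.

```yaml
postsubmits:
  kubernetes-sigs/k8s-container-image-promoter:   # assumed repo path
    - name: post-cip-qualify-staging-image        # hypothetical job name
      branches:
        - ^master$
      decorate: true
      spec:
        containers:
          # Placeholder staging image reference; in practice this would be the
          # image built from the merged commit under test.
          - image: gcr.io/k8s-staging-artifact-promoter/cip:latest
            command:
              - cip
            args:
              - -h    # stand-in for a real e2e / flag-exercising test suite
```

Because the job is a decorated postsubmit, each run gets a stable log URL, which is exactly the artifact a "promote the promoter" PR could link to as evidence that the candidate image was exercised.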