OCPBUGS-20418: Introduce kubelet-dependencies.target and firstboot-osupdate.target #3967
Conversation
The primary motivation here is to stop pulling container images `Before=network-online.target`, because doing so creates complicated dependency loops. This aims to fix https://issues.redhat.com/browse/OCPBUGS-15087.

A lot of our services are "explicitly coupled" with ordering relationships; e.g. some had `Before=kubelet.service` but not `Before=crio.service`. systemd .target units are explicitly designed for this situation. We introduce a new `kubelet-dependencies.target`: both `crio.service` and `kubelet.service` are `After+Requires=kubelet-dependencies.target`, and units which are needed for kubelet should now be both `Before + RequiredBy=kubelet-dependencies.target`.

Similarly, we had a lot of entangling of the "node services" and the firstboot OS updates, with things explicitly ordering against `machine-config-daemon-pull.service` or poking into the implementation details of the firstboot process with `ConditionPathExists=!/etc/ignition-machine-config-encapsulated.json`. Create a new `firstboot-osupdate.target` that today succeeds after `machine-config-daemon-firstboot.service`. Then most of the "infrastructure workload" that must run only on the second boot (such as `gcp-hostname.service`, `openshift-azure-routes.path`, etc.) can cleanly order after that.

This also aids the coming work for bare metal installs to do OS updates at install time, because then we will "finalize" the OS update and continue booting.

(cherry picked from commit 2141f4b)
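The unit wiring described above can be sketched roughly as follows. This is illustrative only: the unit names come from the description, but the exact contents of the units shipped by the MCO may differ.

```ini
# Illustrative sketch only; not the exact units shipped by the MCO.

# kubelet-dependencies.target: a synchronization point for everything kubelet needs.
[Unit]
Description=Dependencies needed to run the kubelet

# In crio.service and kubelet.service: pull in and order after the target.
[Unit]
Requires=kubelet-dependencies.target
After=kubelet-dependencies.target

# In a unit that kubelet needs (for example, an image pull or node setup service):
# run before the target, and let enablement wire it in as a hard requirement.
[Unit]
Before=kubelet-dependencies.target
[Install]
RequiredBy=kubelet-dependencies.target

# Second-boot "infrastructure" units such as gcp-hostname.service or
# openshift-azure-routes.path can then simply order after the firstboot OS update:
[Unit]
After=firstboot-osupdate.target
```

With `RequiredBy=` in the `[Install]` section, enabling the dependency unit creates the `Requires=` link from the target, so the target (and therefore crio and kubelet) will not start until that dependency has.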
@cgwalters: This pull request references Jira Issue OCPBUGS-20418, which is valid. The bug has been moved to the POST state. 6 validation(s) were run on this bug
Requesting review from QA contact: The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/payload 4.14 nightly blocking |
@sdodson: trigger 8 job(s) of type blocking for the nightly release of OCP 4.14
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c16d1f40-685f-11ee-9a34-c857487b6e1b-0 |
/test e2e-metal-ipi |
Not totally sure what to make of the payload run - the failures offhand look like flakes/failures on the Prow hosting cluster or something? |
/retest |
@cgwalters: The following tests failed:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. |
/test e2e-hypershift The failures don't seem to have anything to do with deployment for the nodes, so they're past the point where this change would affect anything. The on-prem jobs passed so from my perspective: |
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: cgwalters, cybertron The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
/hold for QE pre-merge testing |
In order to verify this PR we executed the following steps:
No issues were found. We can safely assume that the rest of the platforms are covered by the prow jobs required to merge the PR. Because of the way the CI images are stored, we cannot execute the "scale" e2e test cases pre-merge. Hence, even if this PR has the qe-approved label, we will have to execute those "scale" e2e test cases post-merge before fully verifying the Jira ticket. We can add the qe-approved label. /label qe-approved Thank you very much for this fix!! |
@cgwalters: This pull request references Jira Issue OCPBUGS-20418, which is valid. 6 validation(s) were run on this bug
Requesting review from QA contact: In response to this:
/label cherry-pick-approved |
Just to repeat I think we should give this a bit more time before we add the backport-risk-assessed label...so far I am not aware of any fallout in 4.15 but it's still early. |
/label backport-risk-assessed |
Merged commit 5cadd58 into openshift:release-4.14
@cgwalters: Jira Issue OCPBUGS-20418: All pull requests linked via external trackers have merged: Jira Issue OCPBUGS-20418 has been moved to the MODIFIED state. In response to this:
Fix included in accepted release 4.14.0-0.nightly-2023-11-03-193211 |
This is needed in 4.13 as well. |
@sinnykumari: new pull request created: #4043 In response to this:
Original description: daemon/update: disable systemd unit before overwriting When overwriting a systemd unit with new content, we need to account for the case where the new unit content has a different `[Install]` section. If it does, then simply overwriting will leak the previous enablement symlinks and become node state. That's OK most of the time, but this can cause real issues as we've seen with the combination of openshift#3967 which does exactly that (changing `[Install]` sections) and openshift#4213 which assumed that those symlinks were cleaned up. More details on that cocktail in: https://issues.redhat.com/browse/OCPBUGS-33694?focusedId=24917003#comment-24917003 Fix this by always checking if the unit is currently enabled, and if so, running `systemctl disable` *before* overwriting its contents. The unit will then be re-enabled (or not) based on the MachineConfig. Fixes: https://issues.redhat.com/browse/OCPBUGS-33694
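In shell terms, the fix amounts to something like the following. This is a minimal sketch; the actual implementation lives in the MCO daemon's Go update code, and the unit name below is just a placeholder.

```sh
# Minimal sketch of the idea; the real logic is implemented in Go in the MCO daemon.
unit=example.service   # placeholder name

if [ "$(systemctl is-enabled "$unit" 2>/dev/null)" = "enabled" ]; then
    # Drop the enablement symlinks created by the *old* [Install] section
    # before the unit file is replaced, so they cannot leak as node state.
    systemctl disable "$unit"
fi

# ...then write the new unit content, and re-enable it (or not) as the
# MachineConfig specifies.
```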