Move cgroup v1 support into maintenance mode #4572
Conversation
harche commented on Apr 5, 2024:
- One-line PR description: Deprecate cgroup v1 support
- Issue link: Move cgroup v1 support into maintenance mode #4569
- Other comments:
Force-pushed from 18810db to b65ea19.
```golang
if m.memorySwapBehavior == kubelettypes.LimitedSwap {
	if !isCgroup2UnifiedMode() && utilfeature.DefaultFeatureGate.Enabled(features.DeprecatedCgroupV1) {
		klog.Warning("cgroup v1 support has been deprecated, please plan for the migration towards cgroup v2")
	}
}
```
Would maybe an event or metric be easier for a cluster admin to react to?
+1 for metric.
Thanks, I will update the KEP.
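For reference, a minimal sketch of how such a metric could be registered in the kubelet using `k8s.io/component-base/metrics`. The name matches the `kubelet_cgroup_version` metric mentioned later in this conversation, but the exact wiring here is an assumption, not the final implementation:

```golang
import (
	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

// CgroupVersion reports the cgroup version detected on the host (1 or 2).
// Subsystem, name, and help text are illustrative; the final metric may differ.
var CgroupVersion = metrics.NewGauge(
	&metrics.GaugeOpts{
		Subsystem:      "kubelet",
		Name:           "cgroup_version",
		Help:           "cgroup version on the hosts.",
		StabilityLevel: metrics.ALPHA,
	},
)

func init() {
	legacyregistry.MustRegister(CgroupVersion)
}
```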
1. Monitor existing cgroup v2 jobs, e.g.
   - https://testgrid.k8s.io/sig-node-release-blocking#ci-crio-cgroupv2-node-e2e-conformance
2. Migrate existing e2e and node e2e jobs that still use cgroup v1 to cgroup v2.
I noticed that COS and Ubuntu images require some special filtering for a cgroup v2 image.
https://github.com/kubernetes/test-infra/blob/master/jobs/e2e_node/swap/image-config-swap.yaml
During a sig-node meeting, I think we discussed maybe making the default images just be cgroup v2.
Revised Code Snippet:

```golang
memLimitFile = "memory.max"
if !libcontainercgroups.IsCgroup2UnifiedMode() && utilfeature.DefaultFeatureGate.Enabled(features.DeprecatedCgroupV1) {
	// ...
}
```
If we go with a kubelet flag, then I think we would need both the kubelet flag and the feature gate?
Yeah, you are right, missed that part. Thanks, I will update it.
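To illustrate the point, a sketch of the combined check, with `suppressCgroupV1DeprecationWarning` as a hypothetical field standing in for however the kubelet flag would be plumbed through; it assumes the same imports as the snippet quoted above:

```golang
// Sketch only: warn when running on cgroup v1 with the deprecation feature
// gate enabled and the (hypothetical) suppression flag unset.
if m.memorySwapBehavior == kubelettypes.LimitedSwap {
	if !isCgroup2UnifiedMode() &&
		utilfeature.DefaultFeatureGate.Enabled(features.DeprecatedCgroupV1) &&
		!m.suppressCgroupV1DeprecationWarning {
		klog.Warning("cgroup v1 support has been deprecated, please plan for the migration towards cgroup v2")
	}
}
```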
As this feature progresses to beta, the default behavior of the kubelet will evolve to reflect the urgency of the migration. Specifically, the kubelet will, by default, refuse to start on hosts still utilizing cgroup v1, thereby directly encouraging users to migrate to cgroup v2 to continue leveraging Kubernetes' full capabilities.

To balance this stricter enforcement with the need for operational flexibility, the `--suppress-cgroupv1-deprecation-warning` flag introduced during the `alpha` stage will retain its functionality but will be enhanced to allow the kubelet to start on cgroup v1 hosts. Users who require additional time for migration, due to logistical or technical constraints, can explicitly set this flag as a temporary measure to bypass the startup restriction while they complete their transition to cgroup v2.
It seems to me that maybe we should have two flags: `--fail-cgroup-v1` and `--suppress-cgroupv1-deprecation-warning`?
In Beta, the flag `--suppress-cgroupv1-deprecation-warning` can be interpreted as "ignore the cgroup v1 warning and start the kubelet anyway".

I would like to keep this as simple as possible. More flags means more things for the user to remember, which might lead to bad UX.
Yea, but having `--suppress-cgroupv1-deprecation-warning` really mean that we are going to fail to start the kubelet is also not great UX.
We should think of a better name for that flag then. Two new flags just for the deprecation is a little overkill, no?
`--allow-cgroupv1-operations`?
How about `--override-cgroupv1-deprecation`?
tbh, from my perspective I don't think we need a flag to set a failure. If we eventually plan to fail on cgroup v1, we could use the feature gate as a gate on whether to fail or not, and then independently use the flag to toggle whether we print a warning.
Yeah, I agree.
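A minimal sketch of that split, reusing the gate and flag names from this thread (illustrative, not the final code): the feature gate alone decides whether cgroup v1 is fatal, while the flag only toggles the warning:

```golang
// Sketch: during kubelet startup, fail or warn on cgroup v1 hosts.
if !libcontainercgroups.IsCgroup2UnifiedMode() {
	if utilfeature.DefaultFeatureGate.Enabled(features.DeprecatedCgroupV1) {
		return fmt.Errorf("cgroup v1 detected: support is deprecated, please migrate the host to cgroup v2")
	}
	if !suppressCgroupV1DeprecationWarning {
		klog.Warning("cgroup v1 support has been deprecated, please plan for the migration towards cgroup v2")
	}
}
```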
I can also see the argument that maybe we shouldn't let people turn off deprecation notices.
Log spam could lead to higher resource utilization.
The respective kubelet subcomponents already have unit test cases to handle cgroup v2.
We would probably want unit tests covering the feature gate and flags. Defaulting logic based on the feature gate/flags would also be important.
Updated, thanks.
Force-pushed from 95e7c7c to b89793a.
- Announce the deprecation of cgroup v1 support in Kubernetes.
- Provide clear migration paths for users running Kubernetes on systems using cgroup v1.
- Ensure test coverage
nit: I think we should drop this from the goals.
I wanted to highlight that before we move to beta and enable that feature gate by default, we have to make sure that all jobs are migrated to cgroup v2. Maybe there is a better way to say that.
I would say that then. ;)
Suggested change: "- Ensure test coverage" → "- Ensure that all CI jobs are migrated to cgroupv2"
If we migrate all jobs to cgroup v2 while this KEP is just beta, wouldn't we lose all test coverage for v1? We should keep test coverage for both until we drop the support.
Suggested change: "- Ensure test coverage" → "- Ensure all features coverage by running all tests on cgroup v2 (while some may still run on cgroup v1 to test back compatibility)"
During the `alpha` stage of this feature's lifecycle, it's anticipated that users will require time to plan and execute their migrations to cgroup v2. Factors influencing the migration timeline include, but are not limited to, selecting compatible kernels and operating systems, planning for potential downtimes, and coordinating changes to minimize disruption during peak operational periods.

To accommodate this, we propose introducing a kubelet flag, `--suppress-cgroupv1-deprecation-warning`, which is not set by default. This flag provides users the flexibility to suppress repetitive deprecation warning logs and events related to cgroup v1 while they are in the process of planning and executing their migration to cgroup v2. The introduction of this flag is intended to reduce log spam and operational noise, acknowledging the non-trivial nature of the migration for many users.
I'm not convinced that we want to go this route. Do we really want to encourage people to turn off a deprecation? We are warning them because we plan to make this a fail case. If we are worried about log spam, we could always maybe make it a V(3) or above log.

Maybe it's something we can leave open for discussion.
Instead of turning it off completely, changing the log level sounds good.
Another option could be to log a warning and an event only on kubelet startup. That way we don't have to worry about log spam.
@haircommander @kannon92 WDYT?
And we don't have to add any extra kubelet arguments to suppress the logs either.
For something that doesn't change its value while the kubelet is up and running, there is no need to bother the user over and over about it, right? We come across like nagging telemarketers if we do that. ;)
Yea, I think logging once on startup and not allowing one to turn it off is a good idea!
Agreed!
What Kevin says 👍. We cannot add a temporary flag that we'll need to remove later, because it will be a no-op once cgroup v1 is completely deprecated.
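A sketch of that consensus, assuming this runs once in the kubelet's startup path, with `recorder` and `nodeRef` as the kubelet already holds them; deliberately, there is no knob to turn it off:

```golang
// Sketch: emitted exactly once, at kubelet startup, on cgroup v1 hosts.
if !libcontainercgroups.IsCgroup2UnifiedMode() {
	klog.Warning("cgroup v1 detected, support is in maintenance mode, please plan for the migration towards cgroup v2")
	recorder.Eventf(nodeRef, v1.EventTypeWarning, "CgroupV1",
		"cgroup v1 support is in maintenance mode, please plan for the migration towards cgroup v2")
}
```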
Upon reaching the `beta` stage, the kubelet's approach towards cgroup version enforcement will be adjusted to underline the importance of transitioning to cgroup v2. By default, the kubelet will not start on systems that are still using cgroup v1. This change acts as a strong nudge for users to migrate to cgroup v2, ensuring they can utilize the enhanced features and capabilities of Kubernetes without compromise. Users who wish to continue with cgroup v1 need to explicitly opt out by disabling the `DeprecatedCgroupV1` feature flag.

Recognizing the necessity for flexibility during this transition, the `--suppress-cgroupv1-deprecation-warning` flag, introduced in the `alpha` phase, will continue to be available. This ensures that users who are in the process of migrating or have specific reasons to delay can still suppress deprecation warnings and maintain operational stability.
### API Changes |
We are adding kubelet flags, which is a new API type.
Is the flag really necessary?
It is not necessary; removed it.
## Motivation

cgroup v2 offers significant improvements over cgroup v1 in terms of feature set, uniformity of interface, and scalability. With major Linux distributions and container runtimes supporting cgroup v2, it is timely for Kubernetes to deprecate cgroup v1 support to streamline maintenance and encourage the adoption of newer, more secure, and more performant technologies.
nit: maybe we should mention https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2254-cgroup-v2/README.md.
The proposal outlines a plan to deprecate cgroup v1 support in Kubernetes, encouraging the community and users to transition to cgroup v2.

### Risks and Mitigations
Should we have a section here stating that many cloud vendors are defaulting to cgroup v2? Or at least mention that all the cloud vendors support cgroup v2 now?
OpenShift GA'd cgroup v2 in 4.14.
I am not sure if k8s should make vendor-specific references. Our reasons to deprecate v1 are that v2 is better and that v1 support from the things k8s depends on, like systemd and the kernel, is disappearing.
The case for posting it is to show that we are deprecating this while cgroup v2 is heavily supported in the popular environments. I think we are ready to do that, but I wasn't sure if we should mention that many of the hyperscalers' default settings are cgroup v2.
I'm happy to be wrong on this one if you think it's not necessary.
Kubernetes targets non-cloud environments, not just cloud; we should be talking about the lack of support in the ecosystem, with evidence, and why it makes sense to drop v1 support.
Many of the users on the public clouds are still on cgroup v1, though.
No. |
There is a removal of cgroup v1.
## Summary

Deprecate the support for cgroup v1 in Kubernetes, aligning with the industry's move towards cgroup v2 as the default for Linux kernel resource management and isolation.
"Deprecate and remove support for ..."

We really want to remove support for cgroup v1 at some point.
### Goals

- Deprecation of cgroup v1 support in Kubernetes.
Deprecation and removal of cgroup v1 support in Kubernetes.
We need a plan for removal outlined here as well.
/assign
Starting from 1.31, during kubelet startup, if the host is running on cgroup v1, the kubelet will log a warning message like:

```golang
klog.Warning("cgroup v1 support has been transitioned into maintenance mode, please plan for the migration towards cgroup v2")
```
We should probably be more explicit about the fact that the host is using v1, instead of expecting the user to be aware of this and understand that the log applies to them. Even if we don't log it on v2 hosts, users will likely not be comparing logs and may only be using cgroup v1 hosts without knowing it.
klog.Warning("cgroup v1 support has been transitioned into maintenance mode, please plan for the migration towards cgroup v2") | |
klog.Warning("cgroup v1 detected. cgroup v1 support has been transitioned into maintenance mode, please plan for the migration towards cgroup v2. More information at https://git.k8s.io/enhancements/keps/sig-node/4569-cgroup-v1-maintenance-mode") |
Revised Code Snippet:

```golang
memLimitFile := "memory.max"
if libcontainercgroups.!IsCgroup2UnifiedMode() {
```
nit, but... this isn't valid code?

Suggested change: `if libcontainercgroups.!IsCgroup2UnifiedMode() {` → `if !libcontainercgroups.IsCgroup2UnifiedMode() {`
I think that underscores a risk with this KEP: We're going to churn a lot of code and potentially introduce bugs. We should be careful reviewing these changes.
That's probably inevitable unless we maintain v1 support indefinitely though.
All existing test jobs that use cgroup v2 should continue to pass without any flakiness. |
Hmm... and cgroup v1 ... because we're going to be moving around a lot of kubelet code and could break v1 as well.
I think "without any flakiness" may be too much. Can we just mention that they should be stable?
The respective kubelet subcomponents already have unit test cases to handle cgroup v2.
(and v1?)
1. Monitor existing cgroup v2 jobs, e.g.
   - https://testgrid.k8s.io/sig-node-release-blocking#ci-crio-cgroupv2-node-e2e-conformance
Again, and v1, at least until we actually remove it. We also really need to have more coverage than one CRI-O job if this is going to be the only supported option going forward.
- For clusters upgrading to a version of Kubernetes where cgroup v1 is in maintenance mode, administrators should ensure that all nodes are compatible with cgroup v2 prior to upgrading. This might include operating system upgrades or workload configuration changes.
- Downgrading to a version that supports cgroup v1 should not require any special considerations regarding cgroup version, as cgroup v2 is backwards compatible with cgroup v1 from a Kubernetes perspective. However, specific node configurations or workload resource requirements may necessitate adjustments.
This isn't quite true, as with the current plans we may have cgroup-v2-only, default-on functionality that is lost when downgrading?

We should probably recommend that people migrate to cgroup v2 before upgrading Kubernetes (and then they can roll back the kernel/host change).
That's a good point. Some kubelet features like swap support only work with v2, so we need to call that out. Thanks for pointing that out. I will update that text.
###### How can someone using this feature know that it is working for their instance?

A warning log messages as well as an event will be emitted about cgroup v1 maintenance mode when the hosts are still using cgroup v1 from 1.31 onwards.
nit:

Suggested change: "A warning log messages as well as an event will be emitted about cgroup v1 maintenance mode when the hosts are still using cgroup v1 from 1.31 onwards." → "Warning logs as well as an event will be emitted about cgroup v1 maintenance mode when the hosts are still using cgroup v1 from 1.31 onwards."
("A" is in conflict with plural "messages" here, we should pick either singular or plural)
Operators can use the `kubelet_cgroup_version` metric to determine the cgroup version on the cluster hosts.
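For illustration, a sketch (under the same assumptions as the metric-registration sketch earlier in this conversation) of how the kubelet could record the detected version once at startup:

```golang
// Sketch: record the detected cgroup version so operators can query it.
version := 1.0
if libcontainercgroups.IsCgroup2UnifiedMode() {
	version = 2.0
}
CgroupVersion.Set(version)
```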
We might also need metrics for any cgroup v2-only features so they know these will be unavailable when reverting to v1?
Not sure how to do that, though. Each feature kind of decides on its own whether it will work on v2 or not.
cgroup v2 offers significant improvements over cgroup v1 in terms of feature set, uniformity of interface, and scalability. With major Linux distributions and container runtimes increasingly supporting cgroup v2, Kubernetes aims to encourage the adoption of this newer, more secure, and more performant technology. By transitioning cgroup v1 support to maintenance mode, Kubernetes can ensure stability for existing deployments while simultaneously promoting the gradual adoption of cgroup v2.

### Goals
I don't think we should do this. Maybe one release before actually deleting support, instead of only announcing it, but we aren't even currently planning to drop support entirely within the scope of this KEP now anyhow?
Something to consider for a future KEP to drop support, I think?
## Motivation

cgroup v2 offers significant improvements over cgroup v1 in terms of feature set, uniformity of interface, and scalability. With major Linux distributions and container runtimes increasingly supporting cgroup v2, Kubernetes aims to encourage the adoption of this newer, more secure, and more performant technology. By transitioning cgroup v1 support to maintenance mode, Kubernetes can ensure stability for existing deployments while simultaneously promoting the gradual adoption of cgroup v2.
Yeah, I think we should describe more how the pressure is coming from the ecosystem / our dependencies (Linux, systemd, ...). If anyone wants long-term cgroup v1 support, then... this KEP is far from the only reason that isn't happening; it's a reaction to the reality that the ecosystem is moving on.
It would be good for us to call that out very explicitly.
@mrunalp @BenTheElder thanks for your feedback. I have updated the KEP.
```golang
eventRecorder.Event(pod, v1.EventTypeWarning, "CgroupV1", fmt.Sprint("cgroup v1 detected. cgroup v1 support has been transitioned into maintenance mode, please plan for the migration towards cgroup v2. More information at https://git.k8s.io/enhancements/keps/sig-node/4569-cgroup-v1-maintenance-mode"))
```

#### Introduce a kubelet flag to disable cgroup v1 support
nit: this is #### where the above ones are ###. Did you mean for this to be a subsection of the above?
good catch.
A new boolean kubelet flag, `--disable-cgroupv1-support`, will be introduced. By default, this flag will be set to `false` to ensure users can continue to use cgroup v1 without any issues. The primary objective of introducing this flag is to set it to `true` in CI, ensuring that all blocking and new CI jobs use only cgroup v2 by default (unless the job explicitly wants to run on cgroup v1).
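A sketch of how such a flag might be wired into the kubelet's flag parsing (the kubelet uses pflag); `DisableCgroupV1Support` is a hypothetical field on the kubelet flags struct, and the help text is illustrative:

```golang
// Sketch: register the proposed flag; field name and help text are assumptions.
fs.BoolVar(&f.DisableCgroupV1Support, "disable-cgroupv1-support", false,
	"If true, the kubelet refuses to start on hosts using cgroup v1. Intended primarily for CI jobs that must run on cgroup v2.")
```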
#### Code modifications for default cgroup assumptions |
same here
good catch, updating.
#### Alpha

N/A

#### Beta

N/A
instead of N/A here, can you explicitly say something like "this feature won't follow the normal cycle of alpha->beta->GA, and will instead be all implemented in GA"?
Operators can use the `kubelet_cgroup_version` metric to determine the cgroup version on the cluster hosts.
Can't they also use the event? That may have better visibility.

A couple more notes, but otherwise looks pretty good.
/lgtm
/cc @deads2k for PRR approval.
/lgtm
/approve
/cc @prod-readiness-approvers
Signed-off-by: Harshal Patil <[email protected]>
5. **Migration Support**: Provide clear and practical migration guidance for users using cgroup v1, facilitating a smoother transition to cgroup v2.

6. **Enhancing cgroup v2 Support**: Address all known pending bugs in Kubernetes' cgroup v2 support to ensure it reaches a level of reliability and functionality that encourages users to transition from cgroup v1.
I don't see a direct discussion of this. Have we met this criterion? Can we get a link to the cgroup v2 testgrid?
I added that line to address the concern raised in this comment: #4572 (comment)
Most likely that particular issue is not directly related to k8s: opencontainers/runc#3933 (comment)
I suggest working with the sig-node leads to be sure all known problems have been resolved and shipped in a release, to ensure all the fixes work prior to switching to maintenance mode for cgroup v1.
PRR looks good.

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, harche, mrunalp, SergeyKanzhelev. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm