From 3db423746af4176d74927986b0c4a451d63dc5b6 Mon Sep 17 00:00:00 2001
From: Martin Schuppert
Date: Fri, 5 Jul 2024 15:42:08 +0200
Subject: [PATCH] Bump memory limits for the operator controller-manager pod

Starting with OCP 4.16, the currently set limits need to be bumped to
prevent the pods from getting OOMKilled, e.g.:

NAME                                                     CPU(cores)   MEMORY(bytes)
swift-operator-controller-manager-6764c568ff-r4mzl      2m           468Mi
telemetry-operator-controller-manager-7c4fd577b4-9nnvq  2m           482Mi

The reason for this is probably the move to cgroups v2. From the
OCP 4.16 release notes:

~~~
Beginning with OpenShift Container Platform 4.16, Control Groups version 2
(cgroup v2), also known as cgroup2 or cgroupsv2, is enabled by default for
all new deployments, even when performance profiles are present. Since
OpenShift Container Platform 4.14, cgroups v2 has been the default, but the
performance profile feature required the use of cgroups v1. This issue has
been resolved.
~~~

Upstream memory increase discussion:
https://github.com/kubernetes/kubernetes/issues/118916

While it mentions that only the node memory stats are wrong and the pod
stats are correct, it is probably related.

Resolves: https://issues.redhat.com/browse/OSPRH-8379

Signed-off-by: Martin Schuppert
---
 config/manager/manager.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/config/manager/manager.yaml b/config/manager/manager.yaml
index c00d1a90..732b3e8a 100644
--- a/config/manager/manager.yaml
+++ b/config/manager/manager.yaml
@@ -62,9 +62,9 @@ spec:
         resources:
           limits:
             cpu: 500m
-            memory: 256Mi
+            memory: 5Gi
           requests:
             cpu: 10m
-            memory: 128Mi
+            memory: 512Mi
       serviceAccountName: controller-manager
       terminationGracePeriodSeconds: 10
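
For reference, a sketch of the resources stanza in config/manager/manager.yaml
as it should look after this change; the values come from the diff above, while
the surrounding indentation is assumed to match the existing file:

~~~
        resources:
          limits:
            cpu: 500m
            memory: 5Gi      # bumped from 256Mi to avoid OOMKill on OCP 4.16
          requests:
            cpu: 10m
            memory: 512Mi    # bumped from 128Mi
~~~

Actual controller-manager memory consumption can be rechecked after the bump
with `oc adm top pods` in the operator namespace, which is where the usage
figures quoted in the commit message come from.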