Memory Limit too low results in OOMKilled #268
When I installed ibmcloud-operator I used a subscription that looks like this:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibmcloud-operator
spec:
  channel: stable
  name: ibmcloud-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  config:
    resources:
      limits:
        cpu: 400m
        memory: 700Mi
      requests:
        cpu: 400m
        memory: 40Mi
```

This is probably oodles more than the operator itself needs 🤷. The increased memory consumption in OpenShift compared to other cluster types might be related to the number of namespaces present in the cluster, especially if you're installing the operator with cluster-wide scope.
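If the Subscription already exists, the same resource overrides can be applied without recreating it. A minimal sketch, assuming the Subscription lives in the `openshift-operators` namespace (adjust to wherever yours was created):

```shell
# Sketch: raise the operator's memory limit by patching the existing
# Subscription. OLM propagates spec.config.resources to the operator's
# Deployment. The namespace below is an assumption.
kubectl patch subscription ibmcloud-operator \
  -n openshift-operators \
  --type merge \
  -p '{"spec":{"config":{"resources":{"limits":{"memory":"700Mi"}}}}}'
```

After the patch, OLM redeploys the operator pod with the new limit, so a brief restart is expected.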
If either of you are interested, we’re open to PRs for fine-tuned watches. It looks like they would significantly reduce memory usage.
@JohnStarich, I have deleted the cluster in the meantime. I tested it today on a totally fresh cluster, and now the default resource limit is fine and sufficient. Regarding the PR, I currently have no clue about the topic :-D
Applying the above edit (increasing the resource limit) helps here.
Yeah, looks like the same issue. Let's focus our conversation in #199. @greglanthier, you're welcome to take this on if you're interested.
Installation of the operator in OpenShift is not successful because the requested memory limit does not meet the operator's actual memory consumption.
The current config results in restarts with an OOMKilled signal.
Changing the limit to 255Mi results in a stable version.
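Based on the 255Mi value reported above, a more conservative override of the Subscription's `spec.config` might look like the fragment below. This is a sketch, not a verified minimum; the request values are carried over from the earlier Subscription example.

```yaml
spec:
  config:
    resources:
      limits:
        cpu: 400m
        memory: 255Mi   # value reported stable above; not a verified minimum
      requests:
        cpu: 400m
        memory: 40Mi
```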
Affected version:
IBM Cloud Operator, 1.1.0