Julius at LEAP reported seeing a popup associated with losing the connection between JupyterLab in the browser and the Jupyter server in https://2i2c.freshdesk.com/a/tickets/528.

The popup stems from JupyterLab believing the connection to the user server is lost. I don't think this relates to Julius' internet connection, but to a disruption of networking for some reason.
Was it the jupyterhub chart's proxy pod being evicted/restarted? No.
Was it the ingress controller pod being evicted/restarted? Yes, I think it was evicted!
Why was the ingress controller pod evicted and a new pod started? I'm quite sure it was evicted from a node due to memory pressure, because I see that a node-exporter daemonset pod, which can't be evicted, was instead restarted after being OOMKilled just a minute before a new ingress controller pod was started on another node.
We can avoid this by setting better memory requests for the ingress controller pod, and by running multiple replicas of the ingress controller pods. Keeping these highly available and reliably running is very relevant.
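As a rough sketch of the replica part (assuming the support chart nests its ingress-nginx dependency's values under an ingress-nginx key, and using that chart's controller.replicaCount option; the number is a placeholder), it could look like this:

```yaml
# Sketch only: the top-level key assumes the support chart exposes its
# ingress-nginx dependency's values under "ingress-nginx".
ingress-nginx:
  controller:
    # Run more than one controller pod so a single eviction or node failure
    # doesn't disrupt all proxied traffic at once (placeholder count).
    replicaCount: 2
```

Suitable memory requests are worked out under Related below.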
Related
Our ingress-nginx chart configuration doesn't specify any cpu or memory requests (infrastructure/helm-charts/support/values.yaml, lines 17 to 36 in 26691bf).

The default values for ingress-nginx are these:
```yaml
## Define requests resources to avoid probe issues due to CPU utilization in busy nodes
## ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
## Ideally, there should be no limits.
## https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
resources:
  ## limits:
  ##   cpu: 100m
  ##   memory: 90Mi
  requests:
    cpu: 100m
    memory: 90Mi
```
Looking at LEAP, 2i2c, and utoronto, I see a range of memory use between 104-131Mi. I think doubling the default 90Mi request makes sense, arriving at 180Mi. For a bit more margin, let's instead go closer to a doubling of the highest usage we've observed so far - 250Mi.
```
kubectl top pod -n support -l app.kubernetes.io/name=ingress-nginx
NAME                                                 CPU(cores)   MEMORY(bytes)
support-ingress-nginx-controller-6585f58669-9zms5    8m           131Mi
```
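As a sketch of where that number would land (again assuming the ingress-nginx key below matches how the support chart nests the dependency's values; the cpu request simply keeps the upstream default):

```yaml
# Sketch only: raise the memory request to roughly double the highest
# usage observed across our clusters, keeping the upstream cpu request.
ingress-nginx:
  controller:
    resources:
      requests:
        cpu: 100m
        memory: 250Mi
```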
Action points