Contour uses more than 5GiB and gets OOM killed #6763
Labels
kind/bug
lifecycle/needs-triage
What steps did you take and what happened:
Contour sometimes gets OOM killed, multiple times an hour.
What did you expect to happen:
For it to not get OOM killed.
Anything else you would like to add:
We are seeing Contour get OOM killed across multiple clusters. In the beginning we just added more memory to the pod, but now that we have allocated 5GiB to the pod it feels like there is a bigger problem.
Not sure what you need for further debugging, but here are the metrics for cached objects. One strange thing is that it is caching all objects in the cluster instead of only the relevant ones.
contour_dag_cache_object{kind="ConfigMap"} 48
contour_dag_cache_object{kind="Gateway"} 1
contour_dag_cache_object{kind="GatewayClass"} 0
contour_dag_cache_object{kind="HTTPProxy"} 169
contour_dag_cache_object{kind="HTTPRoute"} 1
contour_dag_cache_object{kind="Ingress"} 99
contour_dag_cache_object{kind="Namespace"} 48
contour_dag_cache_object{kind="Secret"} 1080
contour_dag_cache_object{kind="Service"} 402
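For context on how small the cache actually is, here is a quick sketch that parses the Prometheus text-format samples above and totals the cached objects per kind. The metric names and values are copied verbatim from this report; the parsing regex is just an illustration, not anything from Contour itself.

```python
import re

# Prometheus text-format samples copied from the metrics reported above.
METRICS = """\
contour_dag_cache_object{kind="ConfigMap"} 48
contour_dag_cache_object{kind="Gateway"} 1
contour_dag_cache_object{kind="GatewayClass"} 0
contour_dag_cache_object{kind="HTTPProxy"} 169
contour_dag_cache_object{kind="HTTPRoute"} 1
contour_dag_cache_object{kind="Ingress"} 99
contour_dag_cache_object{kind="Namespace"} 48
contour_dag_cache_object{kind="Secret"} 1080
contour_dag_cache_object{kind="Service"} 402
"""

# Extract the per-kind counts into a dict, then total them.
counts = {
    m.group(1): int(m.group(2))
    for m in re.finditer(r'contour_dag_cache_object\{kind="(\w+)"\} (\d+)', METRICS)
}
total = sum(counts.values())
print(total)  # 1848
```

Roughly 1,800 cached objects in total, which makes a 5GiB footprint look disproportionate to the cache size.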
Environment:
Kubernetes version (kubectl version): 1.31.2
OS (/etc/os-release): Ubuntu 22.04.4 LTS