On a dual-stack, v6-primary Kubernetes cluster (one that has the v6 cluster/service subnets specified first to kubelet), health checks for eda-api fail, so the pod is never listed as a service backend and is eventually killed.
Our case is RKE2, with the cluster/service CIDRs configured v6-first.
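For context, a dual-stack, v6-primary RKE2 `config.yaml` looks roughly like the sketch below; `cluster-cidr` and `service-cidr` are real RKE2 options, but the prefixes shown are illustrative, not our actual values:

```yaml
# /etc/rancher/rke2/config.yaml - dual-stack with the IPv6 ranges listed first
# (example prefixes only; substitute your own subnets)
cluster-cidr: "fd00:10:42::/56,10.42.0.0/16"
service-cidr: "fd00:10:43::/112,10.43.0.0/16"
```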
I believe this could be fixed by changing `0.0.0.0` to `[::]` in this template for the gunicorn and daphne listeners: https://github.com/ansible/eda-server-operator/blob/main/roles/eda/templates/eda-api.deployment.yaml.j2
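For reference, this is the kind of change meant, sketched as a container-args fragment; it is not a copy of the template, and the names, ports, and full command lines are placeholders:

```yaml
# Sketch only - not the actual contents of eda-api.deployment.yaml.j2;
# ports and trailing arguments are placeholders.
args:
  # was: gunicorn --bind 0.0.0.0:<port> ...
  - gunicorn --bind '[::]:<port>' ...
  # was: daphne -b 0.0.0.0 -p <port> ...
  - daphne -b '::' -p <port> ...
```

On Linux with the default `net.ipv6.bindv6only=0`, a `[::]` bind also accepts IPv4 connections via v4-mapped addresses, so the pod would generally still serve v4 health checks.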
This would also improve v6 support for EDA overall. However, some logic may be needed in this template to select v4 or v6 - for example when a cluster is v4-only or v6-only, or dual-stack with v4 primary; one possible shape is sketched below.
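A hedged sketch of that selection logic follows; the `bind_ipv6` variable and its wiring to the operator spec are hypothetical, not an existing eda-server-operator option:

```yaml
# Hypothetical sketch for eda-api.deployment.yaml.j2 - 'bind_ipv6' is an
# illustrative variable name, not an existing operator/CR field.
{% if bind_ipv6 | default(false) | bool %}
{%   set bind_host = '[::]' %}
{%   set daphne_host = '::' %}
{% else %}
{%   set bind_host = '0.0.0.0' %}
{%   set daphne_host = '0.0.0.0' %}
{% endif %}
args:
  - gunicorn --bind {{ bind_host }}:<port> ...
  - daphne -b {{ daphne_host }} -p <port> ...
```

Since a `[::]` bind normally accepts v4-mapped connections as well, a single dual-stack listener may cover most cases; the explicit selection mainly matters for v4-only clusters or nodes with IPv6 disabled, where binding to `[::]` would fail.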
The failing health checks can be seen attempting to connect to the pod's v6 address.