Enabling session affinity goes to a single pod only #3056
Given that we are using a GCP L4 TCP load balancer, is it possible that the hashing algorithm is using the IP of the GCP load balancer instead of the client? Would this explain why it always goes to the same pod?
For reference, this is how we are installing the helm chart:

helm install --namespace nginx --name nginx \
  --set rbac.create=true \
  --set controller.service.loadBalancerIP=$IP \
  --set controller.publishService.enabled=true \
  --set controller.stats.enabled=true \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.service.type=LoadBalancer \
  stable/nginx-ingress

If we set the image version back to < 0.18.0, we get load-balanced requests.
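If you want to apply that rollback through the chart itself, a sketch along these lines should work; controller.image.tag is a standard value of the stable/nginx-ingress chart, while the release name, namespace, and the 0.17.1 tag are assumptions to adjust to your own setup:

# Sketch: pin the controller image to a pre-0.18.0 tag on an existing release.
helm upgrade nginx stable/nginx-ingress \
  --namespace nginx \
  --reuse-values \
  --set controller.image.tag=0.17.1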
@wstrange can you post a minimal Ingress manifest to reproduce this?
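For reference, a minimal Ingress that enables cookie-based affinity looks roughly like the sketch below; the host, TLS secret, and echoheaders service names are placeholders, not values taken from this thread:

# Sketch: minimal Ingress with cookie-based session affinity (names are placeholders).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoheaders
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  tls:
  - hosts:
    - echo.example.com
    secretName: echo-tls
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders
          servicePort: 80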
I'm trying to replicate this. It looks like it does not happen with HTTP; something to do with HTTPS/SSL. I'll keep testing. Update: I can't replicate it yet with a simple test headers app, even over SSL. Sigh...
We've also encountered exactly the same symptoms on two separate occasions in the last two weeks, where all our load goes to a single pod when using session affinity. We had not experienced this in versions prior to 0.18.0, as far as I remember. Were there any changes to how session affinity is handled in later versions? I can't see anything about it in the release notes. We are currently also unable to reproduce this, so it's hard to find the root cause.
We cannot replicate this with a simple echo headers application, but we do see it in a more complex deployment of our Java application. What is the logic used to calculate the backend pod to steer the session to? Knowing that might help us narrow down how this happens.
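To make that question concrete: in dynamic mode the backend is chosen by a Lua balancer, and a cookie-based sticky balancer conceptually behaves like the sketch below. This is illustrative Lua only, not the actual ingress-nginx source; pick_endpoint and the endpoints table are made-up names.

-- Illustrative sketch, not the ingress-nginx source. endpoints is assumed to
-- map an opaque cookie value (e.g. a hash of "ip:port") to an endpoint.
local function pick_endpoint(endpoints, cookie_value)
  if cookie_value and endpoints[cookie_value] then
    -- the client already carries a valid affinity cookie: stay on that pod
    return endpoints[cookie_value], nil
  end
  -- no usable cookie: pick one of the known endpoints (round-robin/random)
  -- and return the value the response should set as the affinity cookie
  local keys = {}
  for key in pairs(endpoints) do keys[#keys + 1] = key end
  local chosen = keys[math.random(#keys)]
  return endpoints[chosen], chosen
end

The symptom in this issue would show up if the endpoint table the Lua side works with effectively contains a single pod, or if every client ends up being handed the same cookie value, which is why the earlier question about whether the hash input is the load balancer's IP rather than the client's is a plausible lead.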
We have the same problem with a 3-pod deployment and HTTP load balancing. Sometimes (not always) one pod does not receive any HTTP traffic; it is instead sent to one of the remaining pods. We assume this is the same problem reported here and that it was introduced by the dynamic configuration of backends in https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.18.0. Disabling dynamic configuration works around it for us, but of course this is just a workaround and we would like to solve the underlying issue with the Lua balancer.
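For completeness, one way to apply that workaround with the chart used earlier in the thread is sketched here; the --enable-dynamic-configuration flag existed on the 0.18.x to 0.20.x controllers and controller.extraArgs is a value of the stable/nginx-ingress chart, while the release name and namespace are assumptions:

# Sketch: turn off dynamic (Lua) backend configuration as a workaround.
helm upgrade nginx stable/nginx-ingress \
  --namespace nginx \
  --reuse-values \
  --set controller.extraArgs.enable-dynamic-configuration=false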
We are experiencing the same problem (on 0.19.0 and 0.20.0). Thank you @svenbs for suggesting disabling dynamic configuration; that works around it for us. We did, however, notice that with dynamic configuration enabled the domain of the generated cookie is different.
Given that others are seeing this issue, and that it seems hard to reproduce with a simple echoheaders sample, is there any way to log more debug/diagnostic information showing how the dynamic configuration module arrives at its decisions on pod backends? I am guessing there is some timing issue, i.e. some pod becomes ready before the others, or briefly reports as not live, etc.
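A few generic things can be checked on any install while reproducing; the namespace and labels below assume the stable/nginx-ingress chart defaults, and the placeholders need filling in:

# Tail the controller logs while reproducing (adding --v=3 to the controller
# arguments, e.g. via the chart's controller.extraArgs, gives more detail):
kubectl -n nginx logs -l app=nginx-ingress,component=controller -f

# Dump the rendered nginx.conf to see how the upstream for the service is set up:
kubectl -n nginx exec <controller-pod-name> -- cat /etc/nginx/nginx.conf

# Confirm that every pod of the backend service is listed as a ready endpoint:
kubectl -n <app-namespace> get endpoints openam -o wide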
* Added workaround for bug kubernetes/ingress-nginx#3056
* Changes to support CLOUD-855. Note the change in location of the keystore and password store. They are not directly picked up from the secrets mounts.
Anyone having this issue, please try
/close
@ElvinEfendi: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@ElvinEfendi To which tag will this fix be deployed?
@nelsonfassis it will be included in 0.21.0.
We are still seeing this issue on 0.21.0. Anyone else?
Yes, we have the same issue on 0.22.0 as well.
We're still seeing this on 0.21.0.
Me too, I see the same issue on 0.22.0.
Can you try the latest version, 0.23.0?
@ElvinEfendi After updating to 0.23.0, I no longer see this problem. Thank you very much for your suggestion.
@m7luffy you are welcome! In that case the bug was most likely related to #3809 (comment), which got fixed in 0.23.0.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
NGINX Ingress controller version:
0.18.0 and 0.19.0
Kubernetes version (use kubectl version): 1.10.7
Environment:
Using an external GCP TCP load balancer (L4) as the ingress IP.
What happened:
With session affinity enabled, traffic goes to a single pod only.
What you expected to happen:
Multiple requests (e.g. with curl -vk ...) should get distributed across different backends.
How to reproduce it (as minimally and precisely as possible):
Working on a simpler repro...
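Until a minimal repro exists, a quick external check is a loop like the sketch below (the URL is a placeholder). No cookie jar is reused, so each request is independent, and the Set-Cookie value handed back, which differs per backend pod, should vary when affinity is balancing correctly:

# Sketch: send independent requests and inspect the affinity cookie per response.
for i in $(seq 1 10); do
  curl -sk -o /dev/null -D - https://<ingress-host>/ | grep -i '^set-cookie'
done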
Anything else we need to know:
The configuration output is below. The service in question is "openam".