Customization of NGINX configuration #27
Added in #33
I prefer this NGINX Ingress Controller to the one in Kubernetes Contrib (https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx) because this one uses the service Virtual IP instead of maintaining a list of Pod IP addresses. Unfortunately, we can't use this one because of the lack of configuration options. We would also love to be able to configure the log format, for example, or add HTTP authentication on the ingress rule. In Kubernetes Contrib, they do it through annotations:
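For illustration, a hedged sketch of annotation-driven basic auth in the style of the contrib controller; the annotation keys, apiVersion, and resource names here are assumptions and may differ by controller version:

```yaml
# Hedged sketch: annotation keys and names are illustrative only and may
# differ from what the contrib controller actually supports.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth        # Secret holding an htpasswd file
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc
          servicePort: 80
```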
Help me understand why you need to use Virtual IPs. With #33, NGINX now uses endpoints rather than virtual IPs. Of course, it can be added back as an optional feature. Yes, we have it on our roadmap: more configuration options and basic authentication.
Oh, I didn't realize you guys changed that. Ok, so here is my point: when we scale down a service, the Ingress Controller does not work in harmony with the Replication Controller/Replica Set of the service. That means some requests to the Ingress Controller will fail while waiting for the Ingress Controller to be updated. If we use the Service Virtual IP address, we can let kube-proxy do its job in harmony with the replication controller and we get a seamless scale-down.
Don't hesitate if you have more questions; I hope my message was clear.
I'm not sure if using kube-proxy is the best way to approach this problem. What kind of errors do you see?
Ok, I'll try to explain with another example. kube-proxy is just the tool setting the iptables rules; it's not really the point here. When you update a Deployment resource (like changing the docker image), depending on your configuration (rollingUpdate strategy, max surge, max unavailable), the deployment controller will bring down some pods and create new ones. All of this happens in a way where there is no downtime if you use the Service VIP to communicate with the pods. First, when it wants to kill a pod, it removes the pod IP address from the service to avoid any new connections, and it follows the termination grace period of the pod to drain the existing connections. Meanwhile, it also creates a new pod with the new docker image, waits for the pod to be ready, and adds the pod behind the Service VIP.

By maintaining the pod list yourself in the Ingress Controller, at a certain point during a Deployment resource update, some requests will be redirected to pods which are shutting down, because the Ingress Controller does not know a RollingUpdate Deployment is happening. It will learn about the removed pod maybe 1 second later, but for services with a lot of connections/sec, that's potentially a lot of failing requests. Kubernetes is already doing an amazing job updating pods with no downtime, but only if you use the Service VIP. I don't know what the delay is between the Replication Controller changes and the Ingress Controller picking up those changes, but for services with high traffic, it's already too much. Did I miss something? If it's still not clear, or there is something I'm clearly not understanding, please don't hesitate.
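To make that concrete, here is a minimal, hedged sketch (hypothetical names, image, and values) of the rolling-update knobs mentioned above, with a readiness probe so a new pod only joins the Service once it is ready:

```yaml
# Hedged sketch with hypothetical names/values; illustrates the rollingUpdate
# strategy, maxSurge/maxUnavailable, readiness, and grace period discussed above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod is created during the update
      maxUnavailable: 0      # never drop below the desired replica count
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      terminationGracePeriodSeconds: 30   # time allowed to drain existing connections
      containers:
      - name: app
        image: example/app:v2             # the new docker image being rolled out
        ports:
        - containerPort: 8080
        readinessProbe:                   # the pod only receives traffic once this passes
          httpGet:
            path: /healthz
            port: 8080
```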
Before you guys made this change, I ran some tests comparing your Ingress Controller and the Ingress Controller from Kubernetes Contrib. I continuously sent requests to the Ingress Controller (5/sec). Meanwhile, I updated the Deployment resource backing those requests (new docker images):
That's why I was pretty happy with your Ingress Controller.
I see that you also asked this question here: kubernetes-retired/contrib#1140. How do your pods handle termination? During the grace period, can you drain connections on pods via hooks? That way, if NGINX sends a new request to the terminating pod, it will get a connection refused error and try the next pod, while maintaining established connections to the pod until they complete.
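For reference, a hedged sketch (hypothetical image and timing) of hook-based draining: a preStop hook keeps the container alive briefly after the pod is removed from the Service endpoints, so in-flight requests can complete before SIGTERM is delivered.

```yaml
# Hedged sketch: image name and sleep duration are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  terminationGracePeriodSeconds: 60    # upper bound for draining
  containers:
  - name: app
    image: example/app:v1              # hypothetical image
    lifecycle:
      preStop:
        exec:
          # Delay shutdown so existing connections can finish before the
          # container receives SIGTERM.
          command: ["sh", "-c", "sleep 15"]
```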
Yes, I started the discussion there, and when I realised you guys were moving in the same direction, I posted a "mirror" here. At the end of the day, we could always clone one of the repositories and make the changes we need, but I'd rather avoid that. Regarding pod termination, it varies depending on the type of application; there's no real guarantee. This is why I thought it was great that Kubernetes gives us that guarantee through the orchestration of the Deployment resource.
Interesting comment from Gorka:
Great explanation from Tim Hockin on the other thread regarding pod lifecycle. Basically, the same issue could happen when using the Service VIP.
Thanks @edouardKaiser
Currently, there is no way to customize the NGINX configuration other than changing the template file and rebuilding the image.
- Add support for customization of some NGINX parameters, such as `proxy_read_timeout`, `proxy_connect_timeout`, and `client_max_body_size` (#21), via ConfigMaps.
- Add the ability to redefine those parameters per Ingress Resource. Can be done via annotations --> #21
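For illustration, a hedged sketch of what ConfigMap-based customization could look like; the key names, values, and namespace are assumptions and depend on how #33 and #21 are implemented. Per-Ingress overrides would then amount to reading equivalent annotations on the Ingress Resource itself, as suggested in #21.

```yaml
# Hedged sketch: key names, values, and namespace are assumptions,
# not the final interface.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-read-timeout: "60s"
  proxy-connect-timeout: "60s"
  client-max-body-size: "8m"
```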