Customization of NGINX configuration #27

Closed · pleshakov opened this issue May 12, 2016 · 13 comments

@pleshakov (Contributor)

Currently, there is no way to customize the NGINX configuration other than changing the template file and rebuilding the image.

Add support for customizing NGINX parameters such as proxy_read_timeout, proxy_connect_timeout, and client_max_body_size (#21), among others, via ConfigMaps.

Add the ability to redefine those parameters per Ingress resource. This can be done via annotations --> #21
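
As a rough illustration of the ConfigMap idea above, something like the following could hold the global NGINX settings; the ConfigMap name and key names are assumptions for this sketch, on the idea that the controller maps each key onto the corresponding NGINX directive when it regenerates the configuration from its template:

  kind: ConfigMap
  apiVersion: v1
  metadata:
    name: nginx-config             # hypothetical name watched by the controller
    namespace: nginx-ingress
  data:
    proxy-connect-timeout: "30s"   # would map to proxy_connect_timeout
    proxy-read-timeout: "60s"      # would map to proxy_read_timeout
    client-max-body-size: "8m"     # would map to client_max_body_size

Per-Ingress annotations would then override these global defaults for a specific resource.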

@pleshakov (Contributor, Author)

Added in #33

@edouardKaiser

I prefer this NGINX Ingress Controller over the one in Kubernetes Contrib (https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx) because this one uses the service virtual IP instead of maintaining a list of pod IP addresses.

Unfortunately, we can't use this one because of the lack of configuration options. We would love to be able to configure the log format, for example, or add HTTP authentication on the Ingress rule.

In Kubernetes Contrib, they do it through annotations:

  annotations:
    # type of authentication
    ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    ingress.kubernetes.io/auth-secret: http-basic-auth
    # message to display with an appropriate context why the authentication is required
    ingress.kubernetes.io/auth-realm: "Authentication Required"

@pleshakov (Contributor, Author)

Help me understand why you need to use virtual IPs. With #33, NGINX now uses endpoints rather than virtual IPs. Of course, it can be added back as an optional feature.

Yes, we have both on our roadmap: more configuration options and basic authentication.

@edouardKaiser

Oh, I didn't realize you guys changed that.

Ok, so here is my point:

When we scale down a service, the Ingress Controller does not work in harmony with the Replication Controller/Replica Set of the service.

That means some requests to the Ingress Controller will fail while waiting for the Ingress Controller to be updated.

If we use the Service virtual IP address, we can let kube-proxy do its job in harmony with the replication controller, and we get seamless scaling down.
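
For context, the "Service virtual IP" here is just the Service's clusterIP; a minimal sketch (names and ports are placeholders), where kube-proxy load-balances traffic sent to that IP across whichever matching pods are currently ready:

  apiVersion: v1
  kind: Service
  metadata:
    name: web            # placeholder service name
  spec:
    selector:
      app: web           # traffic goes only to ready pods with this label
    ports:
    - port: 80           # port exposed on the virtual IP (clusterIP)
      targetPort: 8080   # container port on the backing pods

Proxying to this single stable IP, instead of to a list of pod IPs, is what lets kube-proxy absorb pod churn during scale-down.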

@edouardKaiser

Don't hesitate if you have more questions, I hope my message was clear.

@pleshakov (Contributor, Author)

I'm not sure if using kube-proxy is the best way to approach this problem.

What kind of errors do you see?
How long is the delay between scaling down a replication controller and those changes being propagated to the Ingress Controller?

@edouardKaiser

OK, I'll try to explain with another example; kube-proxy is just the tool setting the iptables rules, so it's not really the point here.

When you update a Deployment resource (like changing the Docker image), depending on your configuration (rollingUpdate strategy, maxSurge, maxUnavailable), the deployment controller will bring down some pods and create new ones, all in a way that causes no downtime if you use the Service VIP to communicate with the pods.

First, when it wants to kill a pod, it removes the pod's IP address from the service to avoid any new connections, and it respects the pod's termination grace period to drain the existing connections. Meanwhile, it also creates a new pod with the new Docker image, waits for it to be ready, and adds it behind the Service VIP.
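
For reference, the knobs mentioned above live under the Deployment's update strategy; a minimal sketch using the extensions/v1beta1 API that was current at the time (names and image are placeholders):

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: web                      # placeholder name
  spec:
    replicas: 3
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1                # at most one extra pod during the update
        maxUnavailable: 0          # never drop below the desired replica count
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: example/web:v2    # changing this image triggers the rolling update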

By maintaining the pod list yourself in the Ingress Controller, at some point during a Deployment update some requests will be redirected to pods that are shutting down, because the Ingress Controller does not know a rolling update is happening. It will learn about the removed pod maybe one second later, but for services with lots of connections per second, that is potentially a lot of failing requests.

Kubernetes already does an amazing job of updating pods with no downtime, but only if you use the Service VIP. I don't know what the delay is between the Replication Controller changes and the Ingress Controller picking up those changes, but for services with high traffic it's already too much.

Did I miss something? If it's still not clear, or there is something I'm clearly not understanding, please don't hesitate to say so.

@edouardKaiser

Before you guys made this change, I did some tests comparing your Ingress Controller with the Ingress Controller from Kubernetes Contrib.

I was continuously spamming requests to the Ingress Controller (5/sec). Meanwhile, I updated the Deployment resource serving those requests (new Docker image):

  • Kubernetes Contrib: you can clearly see some requests failing at the time of the update.
  • NGINX Ingress Controller: it looks like nothing happened; a perfect deployment with no downtime.

That's why I was pretty happy with your Ingress Controller.

@pleshakov (Contributor, Author)

I see that you also asked this question here -- kubernetes-retired/contrib#1140

How do your pods handle termination? During the grace period, can you drain connections on pods via hooks? That way, if NGINX sends a new request to the terminating pod, it will get a connection refused error and try the next pod, while established connections to the terminating pod are maintained until they complete.
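
A minimal sketch of that approach, assuming the application can be told to stop accepting new connections; the drain command and timings below are placeholders, not a recommendation:

  # pod template fragment
  spec:
    terminationGracePeriodSeconds: 30    # time allowed for in-flight requests
    containers:
    - name: app
      image: example/app:latest          # placeholder image
      lifecycle:
        preStop:
          exec:
            # stop accepting new connections, then give established ones
            # time to finish before SIGTERM is delivered
            command: ["/bin/sh", "-c", "/usr/local/bin/drain && sleep 15"]

With something like this in place, a request NGINX sends to a draining pod would be refused and retried against the next upstream, as described above.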

@edouardKaiser

Yes, I started the discussion there, and when I realised you guys were moving in the same direction, I mirrored the question here.

At the end of the day, we could always clone one of the repositories and make the changes we need, but I'd rather avoid that.

Regarding pod termination, it varies depending on the type of application; there's no real guarantee. That's why I thought it was great that Kubernetes gives us that guarantee through the orchestration of the Deployment resource.

@edouardKaiser

Interesting comment from Gorka:

kubernetes-retired/contrib#1140 (comment)

@edouardKaiser commented Jun 29, 2016

Great explanation from Tim Hockin on the other thread regarding the pod lifecycle. Basically, the same issue could happen when using the Service VIP.

@pleshakov (Contributor, Author)

thanks @edouardKaiser
