delete upstream healthcheck annotation #3207

Merged · 1 commit · Oct 9, 2018
44 changes: 0 additions & 44 deletions docs/examples/customization/custom-upstream-check/README.md

This file was deleted.

25 changes: 0 additions & 25 deletions docs/user-guide/nginx-configuration/annotations.md
@@ -65,8 +65,6 @@ You can add these Kubernetes annotations to specific Ingress objects to customiz
 |[nginx.ingress.kubernetes.io/session-cookie-hash](#cookie-affinity)|string|
 |[nginx.ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|"true" or "false"|
 |[nginx.ingress.kubernetes.io/ssl-passthrough](#ssl-passthrough)|"true" or "false"|
-|[nginx.ingress.kubernetes.io/upstream-max-fails](#custom-nginx-upstream-checks)|number|
-|[nginx.ingress.kubernetes.io/upstream-fail-timeout](#custom-nginx-upstream-checks)|number|
 |[nginx.ingress.kubernetes.io/upstream-hash-by](#custom-nginx-upstream-hashing)|string|
 |[nginx.ingress.kubernetes.io/load-balance](#custom-nginx-load-balancing)|string|
 |[nginx.ingress.kubernetes.io/upstream-vhost](#custom-nginx-upstream-vhost)|string|
@@ -149,29 +147,6 @@ nginx.ingress.kubernetes.io/auth-realm: "realm string"
 !!! example
     Please check the [auth](../../examples/auth/basic/README.md) example.
 
-### Custom NGINX upstream checks
-
-NGINX exposes some flags in the [upstream configuration](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that enable the configuration of each server in the upstream. The Ingress controller allows custom `max_fails` and `fail_timeout` parameters in a global context using `upstream-max-fails` and `upstream-fail-timeout` in the NGINX ConfigMap or in a particular Ingress rule. `upstream-max-fails` defaults to 0, which means NGINX will respect the container's `readinessProbe` if it is defined. If there is no probe and no value for `upstream-max-fails`, NGINX will continue to send traffic to the container.
-
-
-!!! tip
-    With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.
-
-To use custom values in an Ingress rule, define these annotations:
-
-`nginx.ingress.kubernetes.io/upstream-max-fails`: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the `upstream-fail-timeout` parameter to consider the server unavailable.
-
-`nginx.ingress.kubernetes.io/upstream-fail-timeout`: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.
-
-In NGINX, backend server pools are called "[upstreams](http://nginx.org/en/docs/http/ngx_http_upstream_module.html)". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.
-
-!!! attention
-    All Ingress rules using the same service will use the same upstream.
-    Only one of the Ingress rules should define annotations to configure the upstream servers.
-
-!!! example
-    Please check the [custom upstream check](../../examples/customization/custom-upstream-check/README.md) example.
-
 ### Custom NGINX upstream hashing
 
 NGINX supports load balancing by client-server mapping based on [consistent hashing](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](http://www.last.fm/user/RJ/journal/2007/04/10/392555/) consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
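
The two deleted table rows and the deleted "Custom NGINX upstream checks" section are the user-facing half of this PR. As a minimal sketch of the migration impact (the Ingress name, namespace, and values here are illustrative; only the two annotation keys and their "number" value type come from the diff above), this is the shape of a manifest that is affected — after this change the controller simply ignores both keys, so they should be dropped in favor of a `readinessProbe`:

```go
package example

import (
	extensions "k8s.io/api/extensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// legacyIngress carries the two annotations deleted by this PR. After
// upgrading, the ingress controller no longer parses either key;
// endpoint health comes from the pod's readinessProbe instead.
func legacyIngress() *extensions.Ingress {
	return &extensions.Ingress{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "example", // illustrative
			Namespace: "default", // illustrative
			Annotations: map[string]string{
				"nginx.ingress.kubernetes.io/upstream-max-fails":    "3",  // removed by this PR
				"nginx.ingress.kubernetes.io/upstream-fail-timeout": "10", // removed by this PR
			},
		},
	}
}
```
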
3 changes: 0 additions & 3 deletions internal/ingress/annotations/annotations.go
@@ -34,7 +34,6 @@ import (
"k8s.io/ingress-nginx/internal/ingress/annotations/connection"
"k8s.io/ingress-nginx/internal/ingress/annotations/cors"
"k8s.io/ingress-nginx/internal/ingress/annotations/defaultbackend"
"k8s.io/ingress-nginx/internal/ingress/annotations/healthcheck"
"k8s.io/ingress-nginx/internal/ingress/annotations/influxdb"
"k8s.io/ingress-nginx/internal/ingress/annotations/ipwhitelist"
"k8s.io/ingress-nginx/internal/ingress/annotations/loadbalancing"
@@ -76,7 +75,6 @@ type Ingress struct {
 	DefaultBackend *apiv1.Service
 	Denied         error
 	ExternalAuth   authreq.Config
-	HealthCheck    healthcheck.Config
 	Proxy          proxy.Config
 	RateLimit      ratelimit.Config
 	Redirect       redirect.Config
@@ -116,7 +114,6 @@ func NewAnnotationExtractor(cfg resolver.Resolver) Extractor {
"CorsConfig": cors.NewParser(cfg),
"DefaultBackend": defaultbackend.NewParser(cfg),
"ExternalAuth": authreq.NewParser(cfg),
"HealthCheck": healthcheck.NewParser(cfg),
"Proxy": proxy.NewParser(cfg),
"RateLimit": ratelimit.NewParser(cfg),
"Redirect": redirect.NewParser(cfg),
32 changes: 0 additions & 32 deletions internal/ingress/annotations/annotations_test.go
@@ -31,8 +29,6 @@ import (

 var (
 	annotationSecureVerifyCACert = parser.GetAnnotationWithPrefix("secure-verify-ca-secret")
-	annotationUpsMaxFails        = parser.GetAnnotationWithPrefix("upstream-max-fails")
-	annotationUpsFailTimeout     = parser.GetAnnotationWithPrefix("upstream-fail-timeout")
 	annotationPassthrough        = parser.GetAnnotationWithPrefix("ssl-passthrough")
 	annotationAffinityType       = parser.GetAnnotationWithPrefix("affinity")
 	annotationCorsEnabled        = parser.GetAnnotationWithPrefix("enable-cors")
@@ -146,36 +144,6 @@ func TestSecureVerifyCACert(t *testing.T) {
 	}
 }
 
-func TestHealthCheck(t *testing.T) {
-	ec := NewAnnotationExtractor(mockCfg{})
-	ing := buildIngress()
-
-	fooAnns := []struct {
-		annotations map[string]string
-		eumf        int
-		euft        int
-	}{
-		{map[string]string{annotationUpsMaxFails: "3", annotationUpsFailTimeout: "10"}, 3, 10},
-		{map[string]string{annotationUpsMaxFails: "3"}, 3, 0},
-		{map[string]string{annotationUpsFailTimeout: "10"}, 0, 10},
-		{map[string]string{}, 0, 0},
-		{nil, 0, 0},
-	}
-
-	for _, foo := range fooAnns {
-		ing.SetAnnotations(foo.annotations)
-		r := ec.Extract(ing).HealthCheck
-
-		if r.FailTimeout != foo.euft {
-			t.Errorf("Returned %d but expected %d for FailTimeout", r.FailTimeout, foo.euft)
-		}
-
-		if r.MaxFails != foo.eumf {
-			t.Errorf("Returned %d but expected %d for MaxFails", r.MaxFails, foo.eumf)
-		}
-	}
-}
-
 func TestSSLPassthrough(t *testing.T) {
 	ec := NewAnnotationExtractor(mockCfg{})
 	ing := buildIngress()
61 changes: 0 additions & 61 deletions internal/ingress/annotations/healthcheck/main.go

This file was deleted.

95 changes: 0 additions & 95 deletions internal/ingress/annotations/healthcheck/main_test.go

This file was deleted.
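
Since both healthcheck files are deleted without their contents being shown, here is a hedged reconstruction of what `internal/ingress/annotations/healthcheck/main.go` most likely implemented, inferred from this diff: the `Config` fields come from the deleted test assertions above, the registration from the `annotations.go` hunk, and the `UpstreamFailTimeout` default from `proxy/main_test.go`; `UpstreamMaxFails` is assumed by symmetry. Treat it as a sketch, not the verbatim deleted source:

```go
package healthcheck

import (
	extensions "k8s.io/api/extensions/v1beta1"

	"k8s.io/ingress-nginx/internal/ingress/annotations/parser"
	"k8s.io/ingress-nginx/internal/ingress/resolver"
)

// Config holds the per-Ingress passive health check values.
type Config struct {
	MaxFails    int `json:"maxFails"`
	FailTimeout int `json:"failTimeout"`
}

type healthCheck struct {
	r resolver.Resolver
}

// NewParser creates the parser that was registered under "HealthCheck"
// in NewAnnotationExtractor (see the annotations.go hunk above).
func NewParser(r resolver.Resolver) parser.IngressAnnotation {
	return healthCheck{r}
}

// Parse reads upstream-max-fails and upstream-fail-timeout from the
// Ingress, falling back to the global defaults when a key is absent.
func (hc healthCheck) Parse(ing *extensions.Ingress) (interface{}, error) {
	defBackend := hc.r.GetDefaultBackend()

	maxFails, err := parser.GetIntAnnotation("upstream-max-fails", ing)
	if err != nil {
		maxFails = defBackend.UpstreamMaxFails
	}

	failTimeout, err := parser.GetIntAnnotation("upstream-fail-timeout", ing)
	if err != nil {
		failTimeout = defBackend.UpstreamFailTimeout
	}

	return &Config{maxFails, failTimeout}, nil
}
```
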

1 change: 0 additions & 1 deletion internal/ingress/annotations/proxy/main_test.go
@@ -70,7 +70,6 @@ type mockBackend struct {

 func (m mockBackend) GetDefaultBackend() defaults.Backend {
 	return defaults.Backend{
-		UpstreamFailTimeout: 1,
 		ProxyConnectTimeout: 10,
 		ProxySendTimeout:    15,
 		ProxyReadTimeout:    20,
19 changes: 8 additions & 11 deletions internal/ingress/controller/controller.go
@@ -33,7 +33,6 @@ import (
 	clientset "k8s.io/client-go/kubernetes"
 
 	"k8s.io/ingress-nginx/internal/ingress"
-	"k8s.io/ingress-nginx/internal/ingress/annotations/healthcheck"
 	"k8s.io/ingress-nginx/internal/ingress/annotations/proxy"
 	ngx_config "k8s.io/ingress-nginx/internal/ingress/controller/config"
 	"k8s.io/ingress-nginx/internal/k8s"
@@ -237,7 +236,7 @@ func (n *NGINXController) getDefaultUpstream() *ingress.Backend {
 		return upstream
 	}
 
-	endps := getEndpoints(svc, &svc.Spec.Ports[0], apiv1.ProtocolTCP, &healthcheck.Config{}, n.store.GetServiceEndpoints)
+	endps := getEndpoints(svc, &svc.Spec.Ports[0], apiv1.ProtocolTCP, n.store.GetServiceEndpoints)
 	if len(endps) == 0 {
 		glog.Warningf("Service %q does not have any active Endpoint", svcKey)
 		endps = []ingress.Endpoint{n.DefaultEndpoint()}
@@ -434,7 +433,7 @@ func (n *NGINXController) getBackendServers(ingresses []*extensions.Ingress) ([]
 		// check if the location contains endpoints and a custom default backend
 		if location.DefaultBackend != nil {
 			sp := location.DefaultBackend.Spec.Ports[0]
-			endps := getEndpoints(location.DefaultBackend, &sp, apiv1.ProtocolTCP, &healthcheck.Config{}, n.store.GetServiceEndpoints)
+			endps := getEndpoints(location.DefaultBackend, &sp, apiv1.ProtocolTCP, n.store.GetServiceEndpoints)
 			if len(endps) > 0 {
 				glog.V(3).Infof("Using custom default backend for location %q in server %q (Service \"%v/%v\")",
 					location.Path, server.Hostname, location.DefaultBackend.Namespace, location.DefaultBackend.Name)
@@ -544,7 +543,7 @@ func (n *NGINXController) createUpstreams(data []*extensions.Ingress, du *ingres
 		}
 
 		if len(upstreams[defBackend].Endpoints) == 0 {
-			endps, err := n.serviceEndpoints(svcKey, ing.Spec.Backend.ServicePort.String(), &anns.HealthCheck)
+			endps, err := n.serviceEndpoints(svcKey, ing.Spec.Backend.ServicePort.String())
 			upstreams[defBackend].Endpoints = append(upstreams[defBackend].Endpoints, endps...)
 			if err != nil {
 				glog.Warningf("Error creating upstream %q: %v", defBackend, err)
@@ -597,7 +596,7 @@
 			}
 
 			if len(upstreams[name].Endpoints) == 0 {
-				endp, err := n.serviceEndpoints(svcKey, path.Backend.ServicePort.String(), &anns.HealthCheck)
+				endp, err := n.serviceEndpoints(svcKey, path.Backend.ServicePort.String())
 				if err != nil {
 					glog.Warningf("Error obtaining Endpoints for Service %q: %v", svcKey, err)
 					continue
@@ -654,10 +653,8 @@ func (n *NGINXController) getServiceClusterEndpoint(svcKey string, backend *exte
 	return endpoint, err
 }
 
-// serviceEndpoints returns the upstream servers (Endpoints) associated with a
-// Service.
-func (n *NGINXController) serviceEndpoints(svcKey, backendPort string,
-	hz *healthcheck.Config) ([]ingress.Endpoint, error) {
+// serviceEndpoints returns the upstream servers (Endpoints) associated with a Service.
+func (n *NGINXController) serviceEndpoints(svcKey, backendPort string) ([]ingress.Endpoint, error) {
 	svc, err := n.store.GetService(svcKey)
 
 	var upstreams []ingress.Endpoint
@@ -672,7 +669,7 @@ func (n *NGINXController) serviceEndpoints(svcKey, backendPort string,
 			servicePort.TargetPort.String() == backendPort ||
 			servicePort.Name == backendPort {
 
-			endps := getEndpoints(svc, &servicePort, apiv1.ProtocolTCP, hz, n.store.GetServiceEndpoints)
+			endps := getEndpoints(svc, &servicePort, apiv1.ProtocolTCP, n.store.GetServiceEndpoints)
 			if len(endps) == 0 {
 				glog.Warningf("Service %q does not have any active Endpoint.", svcKey)
 			}
@@ -706,7 +703,7 @@ func (n *NGINXController) serviceEndpoints(svcKey, backendPort string,
 			Port:       int32(externalPort),
 			TargetPort: intstr.FromString(backendPort),
 		}
-		endps := getEndpoints(svc, &servicePort, apiv1.ProtocolTCP, hz, n.store.GetServiceEndpoints)
+		endps := getEndpoints(svc, &servicePort, apiv1.ProtocolTCP, n.store.GetServiceEndpoints)
 		if len(endps) == 0 {
 			glog.Warningf("Service %q does not have any active Endpoint.", svcKey)
 			return upstreams, nil
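
The controller-side effect of all these call-site changes is that NGINX now always runs with its default `max_fails=0` (no passive health checking), and endpoint liveness is driven entirely by Kubernetes. As a hedged sketch of the replacement mechanism (the probe path, port, and thresholds are illustrative, not from this PR), this is where the equivalent signal now comes from:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// withReadinessProbe shows the mechanism that replaces the deleted
// annotations: when the probe fails, the endpoints controller drops the
// pod IP from the Endpoints object, and the ingress controller then
// removes it from the NGINX upstream on the next sync.
func withReadinessProbe(c corev1.Container) corev1.Container {
	c.ReadinessProbe = &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",           // illustrative
				Port: intstr.FromInt(8080), // illustrative
			},
		},
		PeriodSeconds:    10, // probe every 10 seconds
		FailureThreshold: 3,  // mark unready after 3 consecutive failures
	}
	return c
}
```
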