kubernetes: backends not updated when i scale replication controller? #448

Closed
jonaz opened this issue Jun 9, 2016 · 5 comments · Fixed by #477
Comments

@jonaz (Contributor) commented Jun 9, 2016

I'm running: containous/traefik:v1.0.0-rc2

If I scale a replication controller up or down: kubectl scale rc --replicas=4 app

The backends in Traefik do not get updated immediately with the new IPs from the pods.
If I try accessing the service, it seems to pick up the new config after a few minutes; I'm not really sure. But it should be instant. Or should it proxy to the service IP and let the service do the balancing?
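The pod IPs that Traefik is expected to pick up as backends can be cross-checked against the service's Endpoints object (same service name, app, as in the manifests below):

kubectl get endpoints app -o yaml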

Ingress looks like this:

kubectl get ingress app-ingress -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  creationTimestamp: 2016-05-19T12:47:24Z
  generation: 1
  name: app-ingress
  namespace: default
  resourceVersion: "1902110"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress
  uid: d61148e3-1dbf-11e6-b22b-005056880f6d
spec:
  rules:
  - host: app.domain.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /
status:
  loadBalancer: {}

Service looks like this:

kubectl get svc app -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-05-19T12:47:08Z
  name: app
  namespace: default
  resourceVersion: "1902061"
  selfLink: /api/v1/namespaces/default/services/app
  uid: cc2c9d93-1dbf-11e6-b22b-005056880f6d
spec:
  clusterIP: 10.3.130.87
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: app
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

@jonaz (Contributor, Author) commented Jun 9, 2016

I think this is related to #449, since I get "Last kubernetes config received less than 2s, waiting..." and it keeps repeating itself, resulting in the config not getting updated.

@errm (Contributor) commented Jun 20, 2016

Sounds like this is caused by #449.

@errm (Contributor) commented Jun 20, 2016

@emilevauge I am thinking that even in the case where the provider is spamming updates at the server, the correct behaviour would be to update the configuration every 2 seconds? If that is not the case, it seems like there might be a bug somewhere in Server#listenProviders?

@emilevauge (Member) commented

@errm, the property ProvidersThrottleDuration has been created to avoid being spammed by providers (for example if you kill 50 containers in one shot on Docker). This is not a bug, but a feature 😄.
What can be done here is to decrease this property, to 100ms for example.
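A minimal example of that workaround, assuming the option is exposed at the top level of traefik.toml as in the 1.x sample configuration (default "2s"):

# traefik.toml
# Minimum duration between two events from a provider before applying
# a new configuration; lowering it makes scaling events show up faster.
ProvidersThrottleDuration = "100ms"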
Finally, a deepEquals check is made in listenConfigurations https://github.com/containous/traefik/blob/master/server.go#L178; I think it would be better to move it into listenProviders https://github.com/containous/traefik/blob/master/server.go#L144, before the providers throttle check. Then, if the configuration doesn't change, we would skip the event and not wait at all.
WDYT?
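A minimal sketch of that idea in Go (simplified, assumed type and field names, not the actual server.go code):

package sketch

import (
	"reflect"
	"time"
)

// configMessage is a simplified stand-in for the provider event type that
// server.go reads from its configuration channel (illustrative names only,
// not the real Traefik types).
type configMessage struct {
	ProviderName  string
	Configuration map[string][]string
}

// listenProviders sketches the proposal: compare each event with the last one
// seen from the same provider and drop it before the throttle is applied, so a
// stream of unchanged Kubernetes events never delays a real update.
func listenProviders(events <-chan configMessage, apply func(configMessage), throttle time.Duration) {
	lastSeen := map[string]configMessage{}
	var lastApplied time.Time
	for msg := range events {
		if prev, ok := lastSeen[msg.ProviderName]; ok && reflect.DeepEqual(prev.Configuration, msg.Configuration) {
			continue // identical configuration: skip the event entirely, no waiting
		}
		lastSeen[msg.ProviderName] = msg
		// Throttle only configurations that actually changed (simplified
		// version of the ProvidersThrottleDuration handling).
		if since := time.Since(lastApplied); since < throttle {
			time.Sleep(throttle - since)
		}
		apply(msg)
		lastApplied = time.Now()
	}
}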

@errm (Contributor) commented Jun 20, 2016

Yeah, that's how I think it should work...

@traefik traefik locked and limited conversation to collaborators Sep 1, 2019