Host dependencies are not resolved on K8S: service discovery broken #1017

Open · tnolet opened this issue Aug 23, 2017 · 1 comment
@tnolet
Contributor

tnolet commented Aug 23, 2017

Apparently, hosts are not resolved in the dependency tree, while ports are. This breaks any use of dependencies (e.g. app -> database).


A quick and simple way to reproduce this is with the blueprint below. The redis container does not do anything, but you can see in the output that the host is not resolved.

name: simpleservice-test:1.0.0
clusters:
  simpleservice:
    services:
      name: simpleservice:1.0.0
      deployable: magneticio/simpleservice:1.0.0
      ports:
        web: 3000/http
      dependencies:
        redis: redis
      environment_variables:
        REDIS_HOST: $redis.host
        REDIS_PORT: $redis.ports.redis_port
  redis:
    services:
      name: redis
      deployable: redis:latest
      ports:
        redis_port: 6379/tcp
Environment:
  • Running on Azure ACS
  • Vamp 0.9.5
  • Kubernetes 1.6
@tnolet tnolet added the bug label Aug 23, 2017
@dragoslav dragoslav self-assigned this Dec 5, 2017
@dragoslav
Contributor

dragoslav commented Dec 7, 2017

In short, the default Vamp k8s implementation doesn't support a single host per service, but it is not that simple.
Here is the same example, but with an additional port:

name: simpleservice-test:1.0.0
clusters:
  simpleservice:
    services:
      name: simpleservice:1.0.0
      deployable: magneticio/simpleservice:1.0.0
      ports:
        web: 3000/http
      dependencies:
        redis: redis
      environment_variables:
        REDIS_PORT_1: $redis.host:$redis.ports.redis_port_1
        REDIS_PORT_2: $redis.host:$redis.ports.redis_port_2
  redis:
    services:
      name: redis
      deployable: redis:latest
      ports:
        redis_port_1: 6379/tcp
        redis_port_2: 6380/tcp

If you deploy this on DC/OS, both hosts should be identical (with different ports); if you deploy it on Kubernetes, not just the ports but also the hosts are different.
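For illustration, the resolved values might look roughly like this (all addresses below are hypothetical; the actual hosts depend on the cluster):

# DC/OS: one HAProxy host, two different ports
REDIS_PORT_1: 10.20.0.100:33001
REDIS_PORT_2: 10.20.0.100:33002

# Kubernetes: a different host per port
REDIS_PORT_1: 10.0.12.7:6379
REDIS_PORT_2: 10.0.98.3:6380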

On DC/OS, Vamp manages all networking via HAProxy.

On k8s, Vamp manages networking only at the "cluster" level, e.g. an A/B situation. Load balancing between instances of the same service (i.e. the same version) is done by the k8s load balancer. A single host per service does not make sense if you want to use k8s services (access is always via host:port).
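A minimal sketch of why the hosts differ per port on k8s, assuming one k8s Service is created per declared port (the names and selectors below are illustrative, not Vamp's actual naming scheme): each Service gets its own ClusterIP, so each port ends up behind a different host.

apiVersion: v1
kind: Service
metadata:
  name: redis-redis-port-1   # illustrative name
spec:
  selector:
    app: redis               # illustrative selector
  ports:
    - port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-redis-port-2   # illustrative name
spec:
  selector:
    app: redis
  ports:
    - port: 6380
      targetPort: 6380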

This is the default implementation; implementing full Vamp-based management remains an option.
Advantages of using this "hybrid" mode:

  • k8s manages network changes caused by pod restarts/migrations, and Vamp updates the HAProxy configuration only when something changes at the cluster level - e.g. a service merge/deletion, a change in gateway weight...
  • pods can be accessed directly using k8s services; otherwise access is only via HAProxy

The advantage of full Vamp management is that you can have a single host per service, with different ports.

I don't think it is enough just to add this explanation to the docs, if it is not there yet.
Since the meaning of host depends on how networking is done, I would suggest the following:

  • remove the top-level deployment hosts (check the JSON/YAML output)
  • something like $redis.host may still be supported, but its value would depend on the scheduler and on Vamp support (i.e. not cross-scheduler compatible) - or, if it is not supported on the current scheduler, the blueprint could be treated as invalid, but that raises another problem: validity would depend on the scheduler
  • optionally add something like $redis.host.redis_port as a shorthand for $redis.host:$redis.ports.redis_port
  • an option to "extract" the host based on the target port, e.g. $redis.ports.redis_port.host (ugly, but just an example); this covers the case where separate host and port variables are expected (both ideas are sketched below)
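As a sketch only (this syntax is a proposal, not something Vamp currently supports), the last two options could look like this in a blueprint:

environment_variables:
  # proposed shorthand: host and port of a named dependency port in one value
  REDIS_1: $redis.host.redis_port_1
  # proposed port-scoped host: separate host and port variables
  REDIS_HOST: $redis.ports.redis_port_2.host
  REDIS_PORT: $redis.ports.redis_port_2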
