Connecting containers #494
Comments
I'm just going to suggest some other options too. You can also run code inside the container, where you might have two options. You could try spending time to make two really great libraries that do service discovery and load balancing, with bindings in many languages, and let the software running inside the container use those. Another option, because many already use a supervisor process, is to inject a supervisor process into the container that can handle things like:
Configuration of the supervisor process can be through environment variables, or service discovery, or both. Update: what if that supervisor process could also request Docker to create iptables or other rules for 127.0.0.1, to make the endpoint IP for direct connections dynamic again (see the sketch below)? |
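To make that loopback idea concrete, here is a minimal sketch (not anything Docker or Kubernetes provides) of an injected helper that listens on a fixed 127.0.0.1 address and forwards traffic to the current endpoint. The `BACKEND_ADDR` variable and the port 5000 are assumptions made up for this sketch:

```go
// localproxy: in-container software always dials a fixed loopback address,
// and this helper forwards that traffic to wherever the service currently is.
package main

import (
	"io"
	"log"
	"net"
	"os"
)

func main() {
	// BACKEND_ADDR is hypothetical, e.g. "10.0.1.5:8080"; a real supervisor
	// would refresh this from service discovery rather than read it once.
	backend := os.Getenv("BACKEND_ADDR")

	// Fixed address that software inside the container connects to.
	ln, err := net.Listen("tcp", "127.0.0.1:5000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			out, err := net.Dial("tcp", backend)
			if err != nil {
				log.Print(err)
				return
			}
			defer out.Close()
			go io.Copy(out, c) // client -> backend
			io.Copy(c, out)    // backend -> client
		}(conn)
	}
}
```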
Has anyone thought of assigning the same IP address to the same service on many hosts and using Equal-Cost Multi-Path routing, with a metric and/or hashing, to route the traffic to the different hosts (the gateway for the route is the host)? This does assume a flat network, though. Update: flat unless you use tunnels between the hosts. |
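Since the comment above relies on hash-based path selection, here is a self-contained sketch (not tied to any real routing stack) of how an ECMP-style forwarder could hash a flow to one of several equal-cost next-hop hosts; the addresses are made up:

```go
// Pick an equal-cost next hop by hashing the flow's endpoints, as an
// ECMP router with hashing would, so one TCP connection always takes
// the same path.
package main

import (
	"fmt"
	"hash/fnv"
)

// nextHops are the per-host gateways that all advertise the service IP.
// These addresses are illustrative, not from the issue.
var nextHops = []string{"10.0.0.1", "10.0.0.2", "10.0.0.3"}

func pickNextHop(srcIP, dstIP string, srcPort, dstPort uint16) string {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s:%d-%s:%d", srcIP, srcPort, dstIP, dstPort)
	return nextHops[h.Sum32()%uint32(len(nextHops))]
}

func main() {
	// The same flow always hashes to the same host.
	fmt.Println(pickNextHop("192.168.1.7", "10.99.0.5", 51234, 80))
}
```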
I'll add a note about libraries inside the container. That's kind of the Docker links v2 approach, or the fabric approach (for Java microservices). One thing I don't like about injecting a supervisor is that you lose info outside the container (about restarts) - users probably want to know that a service is "flapping", but a supervisor process hides that. Sometimes that's unavoidable, but in the general case there seems to be some preference for exposing static metadata about your container that describes it well enough for general-purpose consumers to use. Some of our guys have tunnels being prototyped - worth noting. |
Obviously, if you inject a supervisor which is owned by the infrastructure, it can report restarts/flapping. |
Yeah - that's one of the ideas floated for Docker links v2 - the challenge is that you then get into the business of process control. It would be interesting to explore Foreman- and systemd-style plugins that could handle this. There are some folks working on systemd-container, a set of changes that makes systemd able to do userspace management more easily - I'll mention this to them. |
It kind of reminds me of Pacemaker Remote too. Maybe just drop a unix domain socket in the container for communication?
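A minimal sketch of that Unix-socket idea, assuming the infrastructure bind-mounts a socket path into the container (the `/run/infra.sock` path is made up for this sketch):

```go
// Host-side agent: listens on a socket that gets bind-mounted into
// containers, so in-container processes can talk to the infrastructure.
package main

import (
	"bufio"
	"log"
	"net"
)

func main() {
	// Hypothetical path chosen for this sketch.
	ln, err := net.Listen("unix", "/run/infra.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// Echo each line back; a real agent would answer discovery
			// or lifecycle requests here instead.
			s := bufio.NewScanner(c)
			for s.Scan() {
				c.Write(append(s.Bytes(), '\n'))
			}
		}(conn)
	}
}
```

The container side would simply `net.Dial("unix", "/run/infra.sock")` over the same mounted path.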
@Lennie I agree with @smarterclayton on the downsides of running a supervisor inside the container, which are discussed in the pod documentation: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md . I don't think an in-container supervisor is necessary, though. Lifecycle hooks (#140) could be used for custom registration logic, though I think we should provide a solution that works out of the box. FWIW, there is some discussion of this issue in the networking doc: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md . I dislike environment-variable-based approaches to addressing (including Docker links and the current approach used by the service proxy) due to the lack of updatability, the resulting start-order dependencies, scalability challenges, lack of true namespacing and hierarchy, etc. Imagine running a web browser in a container, with web sites running in other containers. Breaking down this problem a bit:
|
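As a sketch of the name-based alternative to environment variables argued for above, a client could resolve the service at connect time - for example via DNS SRV records. This assumes a cluster-controlled DNS zone; the name `myservice.example.internal` is made up:

```go
// Resolve a service's address by name at connect time, instead of baking
// it into environment variables when the container starts.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Looks up _http._tcp SRV records under a made-up service name.
	_, addrs, err := net.LookupSRV("http", "tcp", "myservice.example.internal")
	if err != nil {
		log.Fatal(err)
	}
	for _, srv := range addrs {
		fmt.Printf("%s:%d\n", srv.Target, srv.Port)
	}
}
```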
OK, so a system with multiple methods? Not just one. Yeah, I can see that. It is probably best. Maybe there are just too many situations to easily capture in one solution.
When live-migrating, I guess if DDNS TTL issues are a problem, in theory you could set up an iptables rule on the old host/old IP pointing to the new host/new IP, for a very short time, if you really have to. The best way is to harden the simple clients, though. They need to be able to handle a lost connection and re-connect. Now they'll have to do two things: do a DNS lookup, then connect to the first IP (or connect to the DNS name if that is in the higher-level API) - see the client sketch below. And those that can't can still use a proxy.
I've also had another thought before: is it possible to represent a group lookup as a DNS lookup somehow? If the only thing you are returning is pod IP addresses, a DNS lookup could be possible. Maybe something like the following, where the order of the labels in the DNS query didn't matter:
Has anyone considered that yet? That would restrict the characters for the labels and values to the ones allowed in DNS, of course - maybe that is already the case, I haven't checked. Unless every resolver library includes punycode support, which I doubt. |
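A sketch of such a hardened client, assuming only that the service is reachable by a DNS name (`app.example.internal` is made up): on any connection failure it re-resolves and reconnects, so a changed pod IP is picked up automatically.

```go
// A client that re-resolves the service name and reconnects whenever the
// connection is lost, instead of caching one IP forever.
package main

import (
	"log"
	"net"
	"time"
)

func dialService(name, port string) net.Conn {
	for {
		// Fresh DNS lookup each attempt; connect to the first IP returned.
		ips, err := net.LookupHost(name)
		if err == nil && len(ips) > 0 {
			conn, derr := net.DialTimeout("tcp",
				net.JoinHostPort(ips[0], port), 3*time.Second)
			if derr == nil {
				return conn
			}
			log.Printf("dial %s failed: %v", ips[0], derr)
		} else {
			log.Printf("lookup %s failed: %v", name, err)
		}
		time.Sleep(time.Second) // back off before re-resolving
	}
}

func main() {
	conn := dialService("app.example.internal", "80")
	defer conn.Close()
	// ... use conn; on read/write errors, call dialService again.
}
```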
I should add: you should make loosely coupled components. So every application that needs it should just include a proxy in its own pod.
I wanted to lay out the general problem of interconnecting containers in terms of use cases and requirements and map out some of the solutions that matter at different scales, to help start a discussion about what additional constructs would be useful in Kubernetes.
Problem statement:
Ways of identifying the address of a remote service to software in a container:
Ways of allowing the address of a remote service to change over time:
Docker links (current and future)
Docker linking currently injects environment variables of a known form into a container, representing links defined on the host. The next iteration of Docker links will most likely implement local service discovery (a discovery endpoint injected into a container) via the definition of links on the host, with a proxy on that host connecting to outbound servers. It will also likely support adapters for exposing environment variables, dynamic config files, or a static cluster file. A sketch of consuming the current environment-variable form follows.
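For reference, this is roughly what consuming today's link environment variables looks like, assuming a link aliased `db` exposing port 5432 (the alias and port are illustrative):

```go
// Read the address of a linked "db" container from the environment
// variables Docker injects for a link. These are fixed at container
// start, which is why they cannot follow a moving endpoint.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	addr := os.Getenv("DB_PORT_5432_TCP_ADDR") // e.g. "172.17.0.2"
	port := os.Getenv("DB_PORT_5432_TCP_PORT") // e.g. "5432"
	fmt.Println("db endpoint:", net.JoinHostPort(addr, port))
}
```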
Observations: