
Allow docker to decide where to bind port #2138

Closed
wants to merge 1 commit

Conversation


@sheerun commented Dec 22, 2016

This fixes the following issue:

I decided to run Nomad inside a Docker container so that I could use Weave Net for inter-Nomad communication. At the same time I wanted Nomad to be able to schedule containers directly on the host each agent is running on. This looks like a chicken-and-egg problem, but it can be solved by running the Nomad process as privileged and bind-mounting the Docker socket inside the Nomad container.

I've configured the Nomad client to bind to the eth1 interface (with network_interface = "eth1"), which happens to be the bridge interface that Docker expects contained processes to bind to for exposing ports. Its IP address is 172.18.0.3.
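A minimal sketch of the client configuration described here (the file path is arbitrary; only the network_interface setting comes from this setup):

```shell
# Nomad client config as described above; eth1 is the bridge interface
# visible inside the nomad container.
cat > client.hcl <<'EOF'
client {
  enabled           = true
  network_interface = "eth1"
}
EOF
```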

Nomad attaches to this address correctly when using the fork/exec driver, but it does something strange with the Docker driver: it runs the container with the equivalent of --publish 172.18.0.3:80:80. The problem is that 172.18.0.3 is the bridge address of the container in which Nomad itself is running, not the address of the eth0 interface on the host that Docker is configured to bind published ports to. This leaves Nomad unable to schedule such a job.

This change leaves the decision of which address to bind to up to Docker, resulting in a command equivalent to --publish 80:80 when running a job in a Docker container.
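The difference between the two behaviors can be illustrated with plain docker run commands (port 80 and the nginx image are just placeholders):

```shell
# Before this change: nomad publishes on the address of network_interface,
# which inside the nomad container is a bridge address the host's docker
# daemon cannot bind to.
docker run -d --publish 172.18.0.3:80:80 nginx

# After this change: docker itself decides which address to bind, so the
# port is published on all host interfaces (0.0.0.0).
docker run -d --publish 80:80 nginx
```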

This change could also fix #1187, but I haven't tested that.


sheerun commented Dec 22, 2016

@narqo Could you confirm?


sheerun commented Dec 22, 2016

By the way, what's up with the failing builds, even on master?


dadgar commented Jan 3, 2017

@sheerun Hey, this change can't be merged, for two reasons:

  1. It breaks the behavior of setting network_interface in the client configuration.
  2. In the future multiple IPs will be supported and in order for the scheduler to do correct port accounting, the drivers must use the correct IP:Port pairs that are allocated.

I would attach the eth0 interface to the container as well and just configure nomad to use that interface. http://stackoverflow.com/a/39393229
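The linked Stack Overflow answer boils down to docker's ability to attach a container to more than one network; a sketch, with hypothetical container and network names:

```shell
# A running container can be connected to additional docker networks,
# so in principle the nomad container could join both the weave network
# and another network. "weave" and "nomad-agent" are placeholder names.
docker network connect weave nomad-agent
```

Whether this helps here depends on whether the extra network actually exposes the host's eth0, which is what the rest of this thread is about.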

dadgar closed this Jan 3, 2017

sheerun commented Jan 3, 2017

@dadgar That isn't possible. You can only attach a bridge network, which has a different IP range than eth0. This makes it impossible to run non-host-network Docker containers with Nomad.


dadgar commented Jan 3, 2017

@sheerun Couldn't you create the network and run with host network mode? Then set Nomad to bind to your weave network interface and set the client network_interface to the host interface?
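A sketch of the split configuration being suggested here, with Nomad's own traffic on the weave interface and allocated ports fingerprinted from the host interface; the address and interface names are hypothetical:

```shell
# bind_addr controls where nomad's own HTTP/RPC/Serf listeners bind;
# client.network_interface controls which interface is fingerprinted
# for allocated task ports. 10.32.0.1 and eth0 are placeholders.
cat > agent.hcl <<'EOF'
bind_addr = "10.32.0.1"       # address on the weave interface

client {
  enabled           = true
  network_interface = "eth0"  # host interface (requires host network mode)
}
EOF
```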


sheerun commented Jan 3, 2017

As for running in host mode: you cannot use Weave and run Docker in host mode at the same time. Weave works as a CNI plugin for Docker, and you can only use it by binding to it with --net weave; that way only the bridge network and the weave network are available inside the container.

Also, I'm not aware of any way to configure Nomad to bind to one interface while network_interface is set to the host interface. As far as I can see, network_interface is used for both. There is no configuration option like docker_interface.


sheerun commented Jan 3, 2017

Let me put it another way: in this issue I don't want to bind to the weave network, but rather to the bridge network, so I can publish the port on the host. When I set network_interface to the host network, Nomad doesn't see it inside the container, which leads to an error. And I can't use host network mode, because Weave won't work with it (you cannot combine host networking with overlay networking).


sheerun commented Jan 3, 2017

Please also remember that I'm running Nomad inside a container, and I want to run it on the weave network so Nomad nodes can communicate with each other securely. As I mentioned, I can't bind eth0 at the same time, since host networking doesn't mix with overlay networking. That's why Nomad can't see the interface when I set network_interface to eth0: it only has access to the bridge network and the weave network.


dadgar commented Jan 3, 2017

@sheerun Unfortunately it looks like this use case may not be supported, then. You essentially want Nomad to run in a container with access both to the host interfaces and to a weave interface, which you are saying is impossible with Docker. Nomad currently needs to be able to fingerprint a network interface in order to use it. Maybe down the line custom interfaces can be specified, which would solve this.

As for the original use case, why do you need Weave to secure Nomad communication? I would just use TLS.
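A minimal sketch of enabling TLS on a Nomad agent, as suggested above; the certificate paths are placeholders and the certificates would come from your own CA:

```shell
# Enable mutual TLS for nomad's HTTP and RPC endpoints.
cat > tls.hcl <<'EOF'
tls {
  http = true
  rpc  = true

  ca_file   = "/etc/nomad/ca.pem"
  cert_file = "/etc/nomad/agent.pem"
  key_file  = "/etc/nomad/agent-key.pem"

  verify_server_hostname = true
}
EOF
```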


sheerun commented Jan 3, 2017

Because Nomad doesn't support authentication for all three of RPC, HTTP, and gossip, and I want Nomad nodes to communicate over the public internet.


dadgar commented Jan 3, 2017

I would just use the addresses block to put RPC/Serf on the public network and HTTP on the private network. With TLS, your certificate is your authentication.
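A sketch of the addresses split being described: RPC and Serf on a public address, HTTP on a private one. The IPs are placeholders.

```shell
# Per-service bind addresses for a nomad agent.
cat > addresses.hcl <<'EOF'
addresses {
  http = "10.0.0.5"     # private network
  rpc  = "203.0.113.7"  # public network
  serf = "203.0.113.7"  # public network
}
EOF
```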


sheerun commented Jan 3, 2017

It is not: as mentioned in #2136, one can simply use --insecure in curl.

IP filtering is a nice idea, but it's hard to orchestrate and maintain. In my case it's especially cumbersome, since I don't know what addresses clients will connect from.


dadgar commented Jan 3, 2017

My suggestion assumes you trust the local network, but I guess that's not the case. I hope you find Nomad useful until ACLs arrive 👍

@github-actions

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Apr 12, 2023