Allow docker to decide where to bind port #2138
Conversation
@narqo Could you maybe confirm?
Btw, what's up with the failing builds, even on master?
@sheerun Hey, this change cannot be merged for two reasons:
I would attach the
@dadgar Such a thing isn't possible. You can only attach a bridge network that has a different IP range than eth0. This makes it impossible to run non-host-network docker containers with nomad.
@sheerun Couldn't you create the network and run with host network mode? Then you would set Nomad to bind to your weave network interface and set the client
As for running in host mode: you cannot both use weave and run docker in host mode. Weave works as a CNI plugin for docker, and you can only use weave when binding to it. Also, I'm unaware of how to configure nomad to bind to some interface while network_interface is set to a host interface. As far as I see
Let me put it another way: in this issue I don't want to bind to the weave network, but rather to the bridge network, so I can publish the port on the same host. When I set
Please also remember that I'm running nomad inside a container, and I want to run it on the weave network so nomad nodes can communicate with each other securely. As I mentioned, I cannot at the same time bind
@sheerun Unfortunately it looks like this use case may not be supported then. You essentially want Nomad to run in a container with access to both the host interfaces and a weave interface, which you are saying is impossible with Docker. Nomad currently needs to be able to fingerprint the network interface in order to use it. Maybe down the line custom interfaces can be specified, which would solve this. As to the original use case, why do you need weave to secure Nomad communication? I would just use TLS.
Because it doesn't support authentication for all three of RPC, HTTP and gossip, and I want nomad nodes to communicate through the public internet. |
I would just use the address block to put RPC/Serf on the public network and HTTP on the private network. With TLS your certificate is auth
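For reference, a minimal sketch of such an `addresses` block in the Nomad agent configuration; the IP addresses here are placeholders, not values from this thread:

```hcl
# Sketch: split Nomad's listeners across networks (placeholder addresses).
addresses {
  http = "10.0.0.5"     # private network
  rpc  = "203.0.113.7"  # public network
  serf = "203.0.113.7"  # public network
}
```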
It is not, as mentioned in #2136. IP filtering is a nice idea, but it's hard to orchestrate and maintain, and in my case it's especially cumbersome as I don't know what addresses clients will connect from.
My suggestion assumes you trust the local network but I guess that is not the case. I hope you can find Nomad useful till ACLs come 👍 |
This fixes the following issue:

I decided to run nomad inside a docker container, so I could use weave net for inter-nomad communication. At the same time, I wanted nomad to be able to schedule containers directly on the host each agent is running on. This seems like a chicken-and-egg problem, but it can easily be solved by running the nomad process privileged and binding the docker socket inside the nomad container.
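As a rough sketch of that setup (the image name and config path are made up for illustration, not the actual setup), the agent container could be started along these lines; the snippet only prints the command it would run instead of executing it:

```shell
#!/bin/sh
# Hypothetical sketch: run the nomad agent itself in a container.
# --privileged plus the host's docker socket lets the containerized
# agent schedule task containers directly on the host.
# Image name and config path are assumptions for illustration only.
CMD="docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/nomad:/etc/nomad \
  example/nomad agent -client -config=/etc/nomad"

# Dry run: print the command rather than executing it.
echo "$CMD"
```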
I've configured the nomad client to bind to the `eth1` interface (with `network_interface = "eth1"`), which happens to be the bridge interface that docker expects contained processes to bind ports to when exposing them. Its IP address is `172.18.0.3`.

Nomad properly attaches to this address when using the fork/exec driver, but it does something strange when running the docker driver: the equivalent of executing the container with `--publish 172.18.0.3:80:80`. But `172.18.0.3` is the bridge address of the container in which nomad is running, not of the `eth0` interface on the host that docker is configured to bind published ports to. This results in nomad being unable to schedule such a job.

This change leaves the decision of which address to bind to up to docker, which results in a command equivalent of `--publish 80:80` when running a job in a docker container.

This change can also potentially fix #1187, but I didn't test it.
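The difference between the two publish specs can be sketched as follows, using the example IP and port from above; the snippet only prints the two docker invocations rather than running them:

```shell
#!/bin/sh
# Illustration of the two publish specs (values from the example above).
ADVERTISE_IP="172.18.0.3"  # address of nomad's network_interface
PORT=80

# Before this change: nomad pins the publish spec to its own bridge
# address, which does not exist on the host, so docker cannot bind it.
echo "docker run --publish ${ADVERTISE_IP}:${PORT}:${PORT} image"

# With this change: the host IP is omitted, so docker itself decides
# which address to bind (by default, all host interfaces).
echo "docker run --publish ${PORT}:${PORT} image"
```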