Cannot map port 53 from container to host; conflicts with dnsmasq #13
Comments
To confirm, are you running Podman rootless or as root?
@mheon As root. As far as I know, to use the docker-compose features you need to be running as root. More information here: https://www.redhat.com/sysadmin/compose-kubernetes-podman
@james-crowley can you provide the compose file in question, or a simplified one?
@baude It's in the issue. I already simplified it down; it's at the bottom.
OK, I can replicate this. I will need to dive in deeper. Thanks for the issue, and I'll update here as soon as I can.
@baude Let me know if you need any help. Happy to lend a hand. I got access to
The socat logs from this operation show this:
Which is odd... so it seems that podman is doing exactly what it's been told: map 3233 to a random port. So now, we should focus on why
The collision on port 53 is due to dnsmasq running on the network created by compose. What happens if you temporarily rename the CNI plugin called dnsname? On my system that is /usr/libexec/cni/dnsname.
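For anyone following along, temporarily disabling the plugin amounts to a rename; a minimal sketch, assuming the path given above (it may differ by distribution):

```sh
# Temporarily move the dnsname CNI plugin aside so podman cannot spawn
# dnsmasq on compose-created networks (path as mentioned above).
sudo mv /usr/libexec/cni/dnsname /usr/libexec/cni/dnsname.disabled

# ...re-run docker-compose up to test...

# Restore the plugin afterwards:
sudo mv /usr/libexec/cni/dnsname.disabled /usr/libexec/cni/dnsname
```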
@baude Same path on my system. Renaming it works, but the problem now is that you are not able to use the internal DNS to resolve container names. Thus I end up with these errors:
It seems like podman is implementing a different solution for DNS resolution between the containers? As this conflict does not happen with docker. Is there a way to tune/configure the CNI plugin to not launch the service on the same network compose creates?
Yeah, there is a way, but not through compose; see the sketch below. And I assume you want container-to-container name resolution. I have some ideas on this to discuss with the team and see if we can come up with a solution.
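A minimal sketch of the non-compose route, assuming a podman recent enough to support the flag:

```sh
# podman network create can skip the dnsname plugin entirely, so no
# dnsmasq is started for that network; compose-created networks do not
# expose this knob, which is the limitation discussed above.
podman network create --disable-dns nodns-net
```

The trade-off is exactly the one raised above: containers on such a network lose name-based resolution of each other.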
@baude Yeah, container-to-container resolution is what I am after. Having docker-compose with podman behave the same as, or as close as possible to, docker-compose with docker would be great.
A friendly reminder that this issue had no activity for 30 days.
@baude Any updates, or is testing help needed?
Can replicate with pi-hole in docker-compose.
A friendly reminder that this issue had no activity for 30 days.
We should deal with this issue of DNS ports in podman 4.0.
Probably need to have a discussion about how...
A friendly reminder that this issue had no activity for 30 days.
This will be fixed in 4.0 with aardvark.
I am not sure if this will be fixed for 4.0; @mheon, right?
If not, we should move this issue there.
This will not be 4.0 unless we get very ambitious.
Do we have a card for the free-up-53 work in Aardvark?
No.
Deal breaker for me.
What's wrong with adding an iptables rule like the one suggested by #13 (comment), assuming we also filter by source IP or interface? Running manually with such a rule seems to work as expected, so it doesn't look too bad? (FWIW I'm now struggling after containers/podman#14412: I added a dnsmasq so that updating nameservers by changing network would work, fell into this same problem, made dnsmasq run with bind-interfaces so it wouldn't conflict, but now I'm running into more problems with that dnsmasq not starting up reliably when adding more interfaces into the mix, so I started looking into this instead. It feels much less hacky to free up port 53 here instead of pouncing on dnsmasq some more.) @f1yn
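A sketch of what such a rule might look like; the bridge name (podman1), gateway address (10.89.0.1), and aardvark-dns port (1153) are all illustrative assumptions, not values from this thread:

```sh
# Redirect DNS queries arriving on the compose network's bridge to an
# aardvark-dns instance listening on a high port, filtering by interface
# so host traffic to port 53 is left alone.
sudo iptables -t nat -A PREROUTING -i podman1 -p udp --dport 53 \
    -j DNAT --to-destination 10.89.0.1:1153
sudo iptables -t nat -A PREROUTING -i podman1 -p tcp --dport 53 \
    -j DNAT --to-destination 10.89.0.1:1153
```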
The solution that worked for me was to bind the DNS from the container to the IP of the machine directly. Then it doesn't conflict with aardvark-dns. But in resolv.conf, you need to use the machine's IP, not 127.0.0.1.
Yes. I've just had a look at the netavark code that spawns aardvark-dns (this issue probably ought to move there?) and there's a bit of plumbing to do since we don't currently have anything in PREROUTING, but it doesn't look too bad. I don't have time to do it immediately, but if it's not done by mid-July I'll probably be able to justify spending some time on it... Which is by no means an invitation to wait for me, but at least I'll try!
@martinetd In order to access my DNS server consistently over both my LAN and remotely via WireGuard, I gave up having the DNS port remapped and instead moved all of my DNS infra to a bridged VM. Now my DNS shows up with a dedicated IP address on my LAN, which can be accessed by all of my other containers, my host machine, and any machines connected to my VPN. I hate doing this. It uses way more resources than reasonable, and as a result I ended up completely replacing the hardware my services run on, having to use a processor that supports bridging directly to the network interfaces (I needed hardware that specifically supports VT-d; otherwise I'd be relying on having to manually route packets, AGAIN). Maybe there were legitimate reasons why locking up port 53 was necessary, but under most circumstances this port should be treated as a system port and shouldn't ever be remapped by userland software unless it's documented to do so. @dada513 could you be a little more specific about your use case and what your workaround actually looks like?
Yes, I am using hostnames for DNS resolution. For example, in caddy, my reverse proxy, I point to my nextcloud container using its name. I am binding via podman. What I do is use the IP address of the machine directly in port bindings for DNS, like the sketch below. I too am running a WireGuard VPN, but I had to set the DNS for peers to my machine's public IP.
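A minimal sketch of that binding style; the host address (192.168.1.10) and the pi-hole image are illustrative assumptions, not details given in the comment:

```sh
# Binding the container's DNS port to the machine's own address rather
# than 0.0.0.0 leaves 127.0.0.1:53 and the bridge addresses free for
# aardvark-dns, avoiding the conflict.
podman run -d --name dns \
  -p 192.168.1.10:53:53/udp \
  -p 192.168.1.10:53:53/tcp \
  docker.io/pihole/pihole
```

Clients (and resolv.conf, per the earlier comment) then point at 192.168.1.10 rather than 127.0.0.1.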
I had no idea that one could use the full IPv4 address as a prefix when describing bindings like that 🤯 I'll give this a try.
It works*, but I'm still a little skeptical because I'm noticing that new containers added to the shared network seem to have issues resolving on my end.
* I'm going to keep monitoring my systems to see if more DNS issues crop up. For now, this seems to solve the issue of putting podman in a completely unusable state, like my older comment pointed out.
hardcode port 1153 and assume aardvark-dns is always started for now Signed-off-by: Dominique Martinet <[email protected]> Fixes: containers/aardvark-dns#13
I've opened a proof-of-concept PR in containers/netavark#323 to just use another port -- if the approach is OK with the maintainers I'll finish the patch over the next few days.
Well, hardcoding another port does not really solve this. We need to pick a random free port and store it somewhere.
Yes, I've addressed that in the PR -- that's why it's a draft: I'm not spending time on boring stuff like adding a config item if you don't like the idea in the first place.
The problem with picking a port at random is that we can't know if it's free, and we need to use it in netavark, so we can't just let aardvark-dns try to find one without adding some communication with it (and just retrying to execute aardvark-dns with different ports until it works lacks a criterion to decide whether it worked, failed because of port binding, or failed for something else, so we're back to square one). Adding a config item would allow users who bind something on whatever port we pick as default to change it if they want to, which would improve the situation considerably from where we are now -- the default could actually be 53 if that's what you want, and we wouldn't need the iptables rule then, or some arbitrary > 1024 port.
Yeah, you are right; picking a random port is not simple at all when trying to fix this properly. We would need to bind to port 0 in aardvark and then somehow communicate the port back. This would also mean that we'd likely have different ports for each network. Maybe adding a config option to containers.conf is best (see the sketch below); this would allow users to have a predictable port when they want to use aardvark for other purposes.
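For illustration, assuming the option eventually lands as dns_bind_port in the [network] table of containers.conf (the name here is an assumption; check containers.conf(5) for the version you run):

```sh
# Hypothetical containers.conf entry moving aardvark-dns off port 53
# system-wide; the option name dns_bind_port is assumed, verify it
# against your installed containers.conf(5) man page.
sudo tee -a /etc/containers/containers.conf <<'EOF'
[network]
dns_bind_port = 1153
EOF
```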
- check NETAVARK_DNS_PORT env var for a different port setup
- if set and not 53, set up port forwarding rules for that port and start aardvark appropriately

Note: this requires containers/common#1084 to be usable by podman, because just setting the env var manually will lose it for teardown, leading to the port forwarding rule never being removed.

Signed-off-by: Dominique Martinet <[email protected]> Fixes: containers/aardvark-dns#13
Is this a BUG REPORT or FEATURE REQUEST?
/kind bug
Description
When using `docker-compose` with podman, podman fails to bring up containers that try to port map ports below 60. Additionally, when trying to map port `53` on the host, it conflicts with the `dnsmasq` process podman spawns.

Steps to reproduce the issue:
Parsing Error

1. Install podman 3.0 as root to utilize docker-compose features
2. Make sure to disable any DNS (port 53) service running on the OS
3. Using the `docker-compose.yml` file below, issue: `docker-compose up`
Port 53 Conflict

1. Install podman 3.0 as root to utilize docker-compose features
2. Make sure to disable any DNS (port 53) service running on the OS
3. Edit the `docker-compose.yml` file and change `- 53:53` to `- 53:XXXX`, where XXXX is anything above 59. Example: `- 53:60`
4. Then issue the following: `docker-compose up`
Describe the results you received:
Using the unmodified `docker-compose.yml` file below will generate the parsing error. From my testing, if I change the port mapping `- 53:53` to anything above 59 for the container port, it passes the parsing error.

Changing the port mapping to `- 53:60` allows the `docker-compose up` to continue, but it fails with this error message:

Just to make sure I am not crazy, I bring down the containers with `docker-compose down`, then check my ports using `sudo lsof -i -P -n`, which results in:

Please note `X.X.X.X` is just me censoring my IPs. As you can see, I do not have any services listening on port `53`.

Next I issue `docker-compose up` again. I see the same port conflict issue. Then I issue `sudo lsof -i -P -n` to check my services before bringing down the containers. As you can see, podman has spawned a `dnsmasq` process. I think this is to allow DNS between the containers, but it seems to conflict if you want to run/port map port `53`.

Describe the results you expected:
I expect not to hit that parsing error; I am not sure why podman/docker-compose is hitting it. When running that exact same `docker-compose.yml` via docker, I have no issues.

I also expect not to hit port 53 conflicts. I am not sure how podman is handling DNS between the containers, but the implementation limits users' ability to host different services.
Additional information you deem important (e.g. issue happens only occasionally):
N/A
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Running on `amd64` hardware. The server is a VM inside of VMware. Also running on Ubuntu 20.04.

docker-compose.yml
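The original compose file is not reproduced here; a minimal hypothetical file matching the `- 53:53` mapping described in the steps above might look like this (the image choice is illustrative):

```sh
# Write a minimal compose file with the port mapping from the report,
# then bring it up; the pi-hole image stands in for any DNS service.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  dns:
    image: docker.io/pihole/pihole
    ports:
      - "53:53/udp"
      - "53:53/tcp"
EOF
docker-compose up
```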