
Unable to access UDP port 53 exposed by container from network #14365

Closed

tmds opened this issue May 25, 2022 · 27 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
network: Networking related issue or feature

Comments

@tmds (Contributor) commented May 25, 2022

I currently have pihole running with docker on Debian, and I'm trying to get it to work on Fedora Server 36 with podman.

To be able to use port 53, I set DNSStubListener=no in /etc/systemd/resolved.conf
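
For reference, the change looks roughly like this (a minimal sketch, assuming the stock resolved.conf layout; systemd-resolved needs a restart afterwards):

# grep DNSStubListener /etc/systemd/resolved.conf
DNSStubListener=no
# systemctl restart systemd-resolved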

I've started the container using podman as the root user.

# podman ps -a
CONTAINER ID  IMAGE                           COMMAND               CREATED         STATUS                      PORTS                                                                             NAMES
e92da0e3e69b  docker.io/pihole/pihole:latest                        9 minutes ago   Up 9 minutes ago (healthy)  0.0.0.0:53->53/udp, 0.0.0.0:67->67/udp, 0.0.0.0:53->53/tcp, 0.0.0.0:8090->80/tcp  pihole

I can use the DNS server using 127.0.0.1:

# nslookup www.google.com 127.0.0.1
Server:		127.0.0.1
Address:	127.0.0.1#53

Non-authoritative answer:
Name:	www.google.com
Address: 142.250.179.164
Name:	www.google.com
Address: 2a00:1450:400e:80c::2004

However, it doesn't work from the network. It doesn't even work on the machine itself when using the interface IP address:

# nslookup www.google.com 192.168.1.237
;; connection timed out; no servers could be reached

I tried disabling SELinux, and firewalld, but that doesn't make a difference.

# firewall-cmd --state
not running
# getenforce
Disabled

The UDP port (53) can only be reached from localhost.
The TCP ports can be reached from the network.

cc @mheon @rhatdan

@mheon (Member) commented May 25, 2022

By "from the network" do you mean other hosts on the same physical network, and not other containers on a Podman network, correct?

If so, I'd assume that you have firewall rules blocking UDP traffic on that port from reaching the host running Podman. Firewalld being off doesn't rule that out; they could be plain iptables rules. Can you provide the output of iptables -nvL?

@tmds (Contributor, Author) commented May 25, 2022

By "from the network" do you mean other hosts on the same physical network, and not other containers on a Podman network, correct?

Yes.

Can you provide the output of iptables -nvL?

# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 3945 1190K NETAVARK_FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* netavark firewall plugin rules */

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain NETAVARK_FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 1552 1052K ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         ctstate RELATED,ESTABLISHED
 1756 96827 ACCEPT     all  --  *      *       10.88.0.0/16         0.0.0.0/0           

@kontza commented May 26, 2022

Hi,

I'm battling with a similar problem, only my new OS is openSUSE MicroOS.

I've been using netcat as a low-level connection tester. I set it up to listen for either UDP or TCP traffic, and in another terminal another instance of netcat connects to the first. You could try that to see, at a low level, whether any traffic is getting through.
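
For example, something along these lines (a rough sketch; the exact flags differ between netcat variants, and the port number is an arbitrary placeholder). On the Pi-hole host, listen on a spare UDP port:

nc -u -l 9999

Then, from another machine on the LAN, send a test string to the host's interface address and see whether the listener prints it:

echo test | nc -u 192.168.1.237 9999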

@tmds (Contributor, Author) commented May 26, 2022

I've been using netcat as a low-level connection tester. I set it up to listen for either UDP or TCP traffic, and in another terminal another instance of netcat connects to the first. You could try that to see, at a low level, whether any traffic is getting through.

If I run nc on the system and listen on UDP 53, I see recvmsg calls for DNS queries coming in.

I noticed these in the pihole log.

cap[cap_net_raw] not permitted
cap[cap_net_admin] not permitted
cap[cap_sys_nice] not permitted

I forgot to add these caps. Adding them didn't solve the issue though.

I'm now starting pihole as:

podman run --rm -p 53:53/udp -p 53:53/tcp -p 67:67/udp -p 8090:80/tcp --cap-add CAP_NET_RAW --cap-add CAP_NET_ADMIN --cap-add CAP_SYS_NICE docker.io/pihole/pihole

netstat shows the UDP ports bound to the any address (0.0.0.0).

# netstat -lunp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
udp        0      0 0.0.0.0:53              0.0.0.0:*                           69742/conmon        
udp        0      0 0.0.0.0:67              0.0.0.0:*                           69742/conmon        

I don't understand why using 127.0.0.1 works, and 192.168.1.237 does not.

When I strace the PID that shows up in netstat I don't see any activity, even for the queries that succeed.

@rhatdan (Member) commented May 26, 2022

@Luap99 @mheon @flouthoc This feels a lot like aardvark-dns holding onto port 53 and causing issues.

@mheon (Member) commented May 26, 2022

Aardvark isn't running; otherwise Conmon wouldn't have been able to bind to 0.0.0.0:53.

Can you also provide iptables -t -nat -nvL? I'm beginning to suspect port forwarding itself is the issue.

@tmds (Contributor, Author) commented May 26, 2022

# iptables -t -nat -nvL
iptables v1.8.7 (nf_tables): table '-nat' does not exist
Perhaps iptables or your kernel needs to be upgraded.

My Fedora VM runs kernel v5.17.5.

@mheon (Member) commented May 26, 2022

Oh, sorry, typo on my part - iptables -t nat -nvL (no - before nat)

@tmds (Contributor, Author) commented May 30, 2022

# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 1799  108K NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    84 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 4797  317K NETAVARK-HOSTPORT-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
 1616 97383 NETAVARK-1D8721804F16F  all  --  *      *       10.88.0.0/16         0.0.0.0/0           

Chain NETAVARK-1D8721804F16F (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16        
 1616 97383 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4         

Chain NETAVARK-DN-1D8721804F16F (9 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpt:2022
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:2022
    3   180 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:2022 to:10.88.0.6:22
 1206 72360 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpt:8082
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:8082
 1206 72360 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8082 to:10.88.0.6:3000
  132  7920 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpts:80:81
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpts:80:81
  316 18960 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpts:80:81 to:10.88.0.9:80-81/80
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpt:443
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:443
   64  3840 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:10.88.0.9:443
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpt:53
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:53
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:53 to:10.88.0.10:53
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpt:8090
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:8090
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8090 to:10.88.0.10:80
    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       10.88.0.0/16         0.0.0.0/0            udp dpt:53
    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       127.0.0.1            0.0.0.0/0            udp dpt:53
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 to:10.88.0.10:53
    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       10.88.0.0/16         0.0.0.0/0            udp dpt:67
    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       127.0.0.1            0.0.0.0/0            udp dpt:67
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:67 to:10.88.0.10:67
   11   660 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpt:8001
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:8001
   11   660 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8001 to:10.88.0.11:5000

Chain NETAVARK-HOSTPORT-DNAT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    3   180 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:2022 /* dnat name: podman id: d86bb02c970c0bea527beef6d9077d7a6ee12e375c47856c7d2d6740eb4084d6 */
 1206 72360 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8082 /* dnat name: podman id: d86bb02c970c0bea527beef6d9077d7a6ee12e375c47856c7d2d6740eb4084d6 */
  316 18960 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpts:80:81 /* dnat name: podman id: 6f504a3214e7d4cf2ff00d173c3f5b04cf108c8d09e469fdebb8226eaa1733c2 */
   64  3840 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 /* dnat name: podman id: 6f504a3214e7d4cf2ff00d173c3f5b04cf108c8d09e469fdebb8226eaa1733c2 */
    0     0 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:53 /* dnat name: podman id: ce14d7e9bd0c9a11609b183551ad71752c592fb9eb178b844c44c5285be7acce */
    0     0 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8090 /* dnat name: podman id: ce14d7e9bd0c9a11609b183551ad71752c592fb9eb178b844c44c5285be7acce */
    0     0 NETAVARK-DN-1D8721804F16F  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 /* dnat name: podman id: ce14d7e9bd0c9a11609b183551ad71752c592fb9eb178b844c44c5285be7acce */
    0     0 NETAVARK-DN-1D8721804F16F  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:67 /* dnat name: podman id: ce14d7e9bd0c9a11609b183551ad71752c592fb9eb178b844c44c5285be7acce */
   11   660 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8001 /* dnat name: podman id: a4c9e82487307a8fbb85f23140e673175e154340b89dc4e7e4cd3d71d6a7b41f */

Chain NETAVARK-HOSTPORT-MASQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 1349 80940 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* netavark portfw masq mark */ mark match 0x2000/0x2000

Chain NETAVARK-HOSTPORT-SETMARK (18 references)
 pkts bytes target     prot opt in     out     source               destination         
 1349 80940 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x2000

@tmds (Contributor, Author) commented May 31, 2022

@mheon do you see something useful?

You should be able to reproduce the issue if you stop systemd-resolved from using the port as described in the top comment and use the podman run command from #14365 (comment).

@mheon (Member) commented May 31, 2022

    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       10.88.0.0/16         0.0.0.0/0            udp dpt:53
    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       127.0.0.1            0.0.0.0/0            udp dpt:53
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 to:10.88.0.10:53

0 packets matched our UDP/53 rules. Same in the hostport-dnat chain. The UDP rules are not matching, so no NAT is happening.

Any chance you can run a DNS server on the local system without the Podman stack (maybe run the container with --net=host?) and verify if traffic flows? The Netavark rules look correct, aside from traffic not reaching them.
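
Something like this should do for that test (a sketch based on the run command earlier in this issue; with --net=host the -p mappings aren't needed, since the container shares the host's network namespace):

# podman run --rm --net=host --cap-add CAP_NET_RAW --cap-add CAP_NET_ADMIN --cap-add CAP_SYS_NICE docker.io/pihole/pihole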

@Luap99 Thoughts?

@Luap99 (Member) commented May 31, 2022

I have no idea. I agree that the iptables rules look correct.

Can you check with tcpdump where the traffic is going?
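
For example, capturing on the external NIC and on the Podman bridge separately should show how far the query gets (a sketch; the external interface name is a placeholder, adjust it to your system; podman0 is the default bridge for the podman network):

# tcpdump -n -i <external-interface> 'udp port 53'
# tcpdump -n -i podman0 'udp port 53'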

@kontza commented May 31, 2022

Do you have host directories mapped into the container as volumes? My connection problems were finally solved when I found out that if I don't map a local directory as a volume into the container, Pi-hole works perfectly. If I uncomment those two volume definitions, Pi-hole does not work as a nameserver (I can still access the web UI), but when the volumes are commented out everything works as with my previous Docker setup.

podman create \                               
        --cap-add CAP_NET_RAW \               
        --cap-add CAP_NET_ADMIN \             
        --cap-add CAP_SYS_NICE \              
        --env TZ='Europe/Helsinki' \          
        --env WEBPASSWORD='e_inutile' \       
        --replace \                           
        --name pihole \                       
        docker.io/pihole/pihole               
#       --volume ./pihole/:/etc/pihole:z \    
#       --volume ./dnsmasq/:/etc/dnsmasq.d:z \

@mheon (Member) commented May 31, 2022

That makes very little sense. Volume mounting and networking are entirely distinct. Are you sure that it's not something to do with the config files you're mounting into the container?

@kontza commented May 31, 2022

That makes very little sense. Volume mounting and networking are entirely distinct. Are you sure that it's not something to do with the config files you're mounting into the container?

Thanks! I was so sure that mounting volumes broke something that I never checked the state of my host directory. It was empty: I had accidentally deleted the Ansible task responsible for creating that directory and its contents :)

This is what you get when you try to do these things tired.

@tmds (Contributor, Author) commented May 31, 2022

0 packets matched our UDP/53 rules. Same in the hostport-dnat chain. The UDP rules are not matching, so no NAT is happening.

initial:

    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       10.88.0.0/16         0.0.0.0/0            udp dpt:53
    6   360 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       127.0.0.1            0.0.0.0/0            udp dpt:53
    9   540 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 to:10.88.0.12:53
    9   540 NETAVARK-DN-1D8721804F16F  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 /* dnat name: podman id: 049e3ead6dc752de7c48e355b84b7bc50131cc6367e2ea9d90211855866d4b59 */

After working query: nslookup www.google.com 127.0.0.1

    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       10.88.0.0/16         0.0.0.0/0            udp dpt:53
    8   480 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       127.0.0.1            0.0.0.0/0            udp dpt:53
   11   660 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 to:10.88.0.12:53
   11   660 NETAVARK-DN-1D8721804F16F  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 /* dnat name: podman id: 049e3ead6dc752de7c48e355b84b7bc50131cc6367e2ea9d90211855866d4b59 */

And now a query that times out: nslookup www.google.com 192.168.1.237

    0     0 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       10.88.0.0/16         0.0.0.0/0            udp dpt:53
    8   480 NETAVARK-HOSTPORT-SETMARK  udp  --  *      *       127.0.0.1            0.0.0.0/0            udp dpt:53
   12   720 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 to:10.88.0.12:53
   12   720 NETAVARK-DN-1D8721804F16F  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 /* dnat name: podman id: 049e3ead6dc752de7c48e355b84b7bc50131cc6367e2ea9d90211855866d4b59 */

So the pkts counter does increment for some of the rules on the failed query.

Any chance you can run a DNS server on the local system without the Podman stack (maybe run the container with --net=host?) and verify if traffic flows?

Using --net=host it works.

Can you check with tcpdump where the traffic is going?

I'm not sure which options are most useful; I'm using: tcpdump -n -i any 'udp port 53'.

For a query that works (nslookup www.google.com 127.0.0.1) I see this:

19:55:35.988148 podman0 Out IP 10.88.0.1.57803 > 10.88.0.13.domain: 55997+ A? www.google.com. (32)
19:55:35.988160 vethc9a07db3 Out IP 10.88.0.1.57803 > 10.88.0.13.domain: 55997+ A? www.google.com. (32)
19:55:35.988585 vethc9a07db3 P   IP 10.88.0.13.domain > 10.88.0.1.57803: 55997 1/0/0 A 172.217.168.196 (48)
19:55:35.988585 podman0 In  IP 10.88.0.13.domain > 10.88.0.1.57803: 55997 1/0/0 A 172.217.168.196 (48)
19:55:35.990171 podman0 Out IP 10.88.0.1.48226 > 10.88.0.13.domain: 16800+ AAAA? www.google.com. (32)
19:55:35.990188 vethc9a07db3 Out IP 10.88.0.1.48226 > 10.88.0.13.domain: 16800+ AAAA? www.google.com. (32)
19:55:35.990643 vethc9a07db3 P   IP 10.88.0.13.domain > 10.88.0.1.48226: 16800 1/0/0 AAAA 2a00:1450:400e:80c::2004 (60)
19:55:35.990643 podman0 In  IP 10.88.0.13.domain > 10.88.0.1.48226: 16800 1/0/0 AAAA 2a00:1450:400e:80c::2004 (60)

For a query that times out (nslookup www.google.com 192.168.1.237), I get:

19:56:14.676757 podman0 Out IP 192.168.1.237.44751 > 10.88.0.13.domain: 30904+ A? www.google.com. (32)
19:56:14.676771 vethc9a07db3 Out IP 192.168.1.237.44751 > 10.88.0.13.domain: 30904+ A? www.google.com. (32)
19:56:19.675120 podman0 Out IP 192.168.1.237.44751 > 10.88.0.13.domain: 30904+ A? www.google.com. (32)
19:56:19.675138 vethc9a07db3 Out IP 192.168.1.237.44751 > 10.88.0.13.domain: 30904+ A? www.google.com. (32)
19:56:24.674844 podman0 Out IP 192.168.1.237.44751 > 10.88.0.13.domain: 30904+ A? www.google.com. (32)
19:56:24.674862 vethc9a07db3 Out IP 192.168.1.237.44751 > 10.88.0.13.domain: 30904+ A? www.google.com. (32)

We don't seem to get a reply (or it doesn't show up here).

@tmds (Contributor, Author) commented Jun 20, 2022

@mheon did you see my last comment? Some of the packet counters are incrementing on the rules.

@mheon (Member) commented Jun 21, 2022

Sorry, been caught up in other issues. I'll try and take a look today or tomorrow.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan (Member) commented Jul 25, 2022

@mheon Any update?

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@finzzz commented Sep 5, 2022

I'm having the exact same issue, even when it's not port 53.

@rhatdan (Member) commented Sep 6, 2022

@Luap99 PTAL

@Constantin1489 commented Mar 3, 2023

I get the same error on macOS: I can't bind the port.

sudo lsof -i:53
There is no output.

podman run -d --name pihole7 \
    -e WEBPASSWORD="11111" \
    -e DNS1=8.8.8.8 \
    -e DNS2=1.1.1.1 \
    -v pihole_pihole:/etc/pihole:Z \
    -v pihole_dnsmasq:/etc/dnsmasq.d:Z \
    -p 8889:80 \
    -p 53:53/tcp \
    -p 53:53/udp \
    -p 443:443 pihole/pihole:latest

The output is:

Error: unable to start container "13f722fc3600bcf6ac50c595dc214fa74e0804b56efe69dcafb7bec67b528f61": cannot listen on the TCP port: listen tcp4 :53: bind: address already in use

@mheon (Member) commented Mar 3, 2023

OS X is going to be a separate issue, due to the involvement of podman machine and a VM. Please open a fresh issue.

@Luap99 (Member) commented Oct 19, 2023

@tmds Is this still an issue?

Luap99 added the kind/bug and network labels on Oct 19, 2023
@Luap99 (Member) commented Nov 29, 2023

I am going to close this since I haven't heard anything back.

Luap99 closed this as completed on Nov 29, 2023
github-actions bot added the locked - please file new issue/PR label on Feb 28, 2024
github-actions bot locked this issue as resolved and limited conversation to collaborators on Feb 28, 2024