Rootless podman 5.2 with pasta now publishes processes which only listen on 127.0.0.1 in the container #24045
@sbrivio-rh @dgibson PTAL
If the process in the container listens on 127.0.0.1, it will only be accessible via 127.0.0.1 on the loopback interface, and not via the external interface:
and this is the expected behaviour for pasta, because it handles both the loopback path in the container (same as the rootlesskit port forwarder) and the non-loopback path (same as the slirp4netns port forwarder). So, if no address is given explicitly for the forwarded port, both paths are covered. But I see now that rootlessport and slirp4netns don't actually map host loopback traffic, so this is surely inconsistent. I thought that since rootlessport uses 127.0.0.1 and ::1 as source addresses, it would also map the connections that should have those as source addresses, but no, it binds only all the other ones. I guess we have four options:
What do you all think?
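For readers following along: the two inbound paths described above can be told apart from the host with something like the following sketch, assuming the reproduction setup from this issue (a server bound to 127.0.0.1:8000 in the container, published with -p 8000:8000; 192.0.2.10 is a placeholder for the host's external address):

# loopback ("spliced") path: pasta forwards at the L4 socket level
curl http://127.0.0.1:8000/
# non-loopback ("tap") path: traffic enters over pasta's L2 link
curl http://192.0.2.10:8000/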
There was CVE-2021-20199 about that, as some applications somehow trust localhost (even though this is not secure at all, since all users on the host can access localhost), so yeah, since then we always make sure the source IP is not 127.0.0.1. Of course, in that case it was bad because even remote connections appeared as 127.0.0.1. For pasta, if only 127.0.0.1 on the host maps to 127.0.0.1 in the container, then this is likely not a big deal.
It is not possible to change this: if the application binds to 127.0.0.1, then there is simply no way to get packets there from another namespace AFAICT (well, not without a user-space proxy). As we forward via the firewall, the packets always go to the eth0 address in the container. Overall I think it is a fair assumption that binding to 127.0.0.1 means no external connections should be made to that address, and pasta breaks this assumption by allowing connections from the host namespace's 127.0.0.1.
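The namespace isolation being described is easy to demonstrate without podman at all; a minimal sketch, assuming iproute2, python3, and root privileges (the namespace name is illustrative):

ip netns add demo
ip netns exec demo ip link set lo up
# server bound to loopback inside the namespace
ip netns exec demo python3 -m http.server 8000 --bind 127.0.0.1 &
# from the host, 127.0.0.1:8000 refers to the host's own loopback, not the
# namespace's; only a process run inside the namespace reaches the server:
ip netns exec demo curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8000/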
That doesn't seem reasonable to me. The original report mentions that this used to work, so can you clarify whether pasta changed this behaviour or whether pasta always worked that way?
It worked that way since the very beginning of pasta. The term of comparison is, quoting, "podman-4.9.4-1.fc39.x86_64 with rootlessport".
...for some definitions of "external", yes.
Right. In that case, by the way, a user can still bind ports to specific interfaces (using pasta-only options at the moment).
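For comparison, podman already lets the host-side bind address be restricted in the standard -p syntax, which keeps a published port off external host interfaces; a sketch reusing the image from the reproduction steps below (pasta-specific options can also be passed through --network=pasta:..., with the exact spelling depending on the podman version):

# publish only on the host's loopback address
podman run --name django -d -p 127.0.0.1:8000:8000 localhost/django 127.0.0.1:8000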
Yes, another option, perhaps more reasonable, would be to implement an option disabling "spliced" inbound connections altogether (something like an explicit, reversed flag).
Right, it is arbitrary what "external" means here: different host or different namespace. As long as the different-host case is covered I don't see any security issues, so I don't mind how it behaves.
I am talking about the interface inside the container netns; that would totally depend on the application inside, not on any podman/pasta options.
I guess there is a good reason for the splice path, speed mostly? I would think most users prefer that. @adelton I'd like to understand your actual use case here better. Why are you forwarding the port but then binding to 127.0.0.1 inside and not wanting the connection to work?
I was happily using the setup with 127.0.0.1 in the container on my Fedoras because I had installed pasta a couple of releases back. And then I spent three hours investigating why the thing which works on my Fedoras (connecting to that container from the host) does not work on GitHub Actions Ubuntu runners. Searching around and in man pages did not suggest it should be happening. It's only when I got a fresh Fedora 39 VM and tried the setup from scratch that I got the difference in behaviour demonstrated. So it's not so much what I want; I frankly don't mind the rootless pasta behaviour. It's mainly the inconsistency with both the rootful and rootlessport behaviour that has bitten me. And given this could lead to some endpoints now being exposed where they previously were not, so a potential security concern, I thought I'd report it as an issue. I guess some note in the documentation would work if functional parity with rootful setups is not desired or not practical.
Yes, that, as we get pretty much host-native throughput on that path. Maybe we'll achieve something similar with VDUSE which might make that more or less obsolete (the tap interface is quite a hurdle for performance), but it will take time.
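A rough way to put numbers on the two paths, as a sketch only: it assumes an image with iperf3 (localhost/iperf3 is a placeholder name) and again uses 192.0.2.10 as a stand-in for the host's external address:

# iperf3 server in the container, port published
podman run -d --name perf -p 5201:5201 localhost/iperf3 iperf3 -s
# spliced path, via host loopback
iperf3 -c 127.0.0.1 -p 5201
# tap path, via a non-loopback host address
iperf3 -c 192.0.2.10 -p 5201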
Oh, so things that are working now weren't working before. The inconsistency stands and needs to be solved somehow, but this is another bit of information showing us that we need to be careful to avoid breaking things.
[snip]
That might be true, but I think it's missing the point. The question is not about host loopback, but about container loopback. The point is that things bound to container loopback are accessible from outside the container, which is indeed surprising. It's mitigated because they're only accessible from host loopback, but it's still odd, and arguably a security problem because it allows unrelated users on the host to access ports that the container thinks are private to itself. However, I don't think it's as hard to fix as you outline. This is, AFAICT, entirely about "spliced" connections; that's the only way we can even reach loopback-bound ports within the container. So, I think all we need to do to fix it is:
There are some real questions about access to the host loopback address via outbound spliced connections, but that's not what this issue is about.
The various pasta port forwarding tests run a socat server inside a container, then connect to it from a socat client on the host. Currently we have the server bind to the same specific address within the container as we connect to on the host. That's not quite what we want. For "tap" tests where the traffic goes over pasta's L2 link to the container it's fine, though unnecessary. For "loopback" tests where traffic is forwarded by pasta at the L4 socket level, however, it's not quite right.

In this case the address used is either 127.0.0.1 or ::1. That's correct and as needed for the host side address we're connecting to. However on the container side, this only works because of an odd and arguably undesirable behaviour of pasta: we use the fact that we have an L4 socket within the container to make such "spliced" L4 connections appear as if they come from loopback within the container. A container will generally expect its loopback address to be only accessible from within the container, and this odd behaviour may be changed in pasta in future.

In any case, the binding of the container side server is unnecessary, so simply remove it.

Link: containers#24045
Signed-off-by: David Gibson <[email protected]>
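For readers unfamiliar with the tests being described, their shape is roughly this (a paraphrased sketch, not the actual test code; the image name is a placeholder and is assumed to ship socat):

# server inside the container, with no explicit bind address (the change
# described above): it listens on all container addresses
podman run -d --name socat-srv -p 8000:8000 localhost/socat-image \
    socat TCP-LISTEN:8000,fork SYSTEM:'echo hello'
# client on the host, exercising the "loopback" forwarding path
socat -u TCP:127.0.0.1:8000 STDOUT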
Well, it's about both in the sense I meant (and thought was desirable... and maybe it even is): you connect to host's loopback, and if it's mapped, it maps to the container's loopback as well. The other way, with
Not to me! We splice using the loopback interface in the container. I think it's also implied by the "Handling of local traffic in pasta" section of the man page, even though surely not explicit.
...not so clearly in my opinion: the ports are exposed with -p/--publish.
That's a nice idea, and I guess it has relatively low chances of breaking things, but they would still break for users who assumed that binding to 127.0.0.1 in the container and exposing that port would make it visible from the host (see #24045 (comment)).
Right.
Sure, that's another matter. But accessing ports bound to a loopback address in the container should be at least optional. I'm almost convinced we can make it an opt-in and it's unlikely that we'll break any usage, but we need a way to fix that quickly, just in case.
Patch series and related discussion at https://archives.passt.top/passt-dev/[email protected]/ by the way.
Well, obviously there could be use cases, but I really don't think this would be the expected behaviour. It's so completely unlike any other networking model (physical, rootful, and it seems slirp too). If you really want to share a
I don't really think it's implied by that. As my draft patch demonstrates, it certainly need not be the case, even with traffic over
Yeah, that also mitigates it. The container could still have different servers running on the same port on loopback and non-loopback addresses. Or it could have a server on
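For concreteness, that same-port scenario looks something like this inside the container, as a sketch assuming socat is available (10.88.0.2 is a placeholder for the container's own non-loopback address):

# two distinct servers on port 8000; binding to different specific
# addresses does not conflict
socat TCP-LISTEN:8000,bind=127.0.0.1,fork SYSTEM:'echo private' &
socat TCP-LISTEN:8000,bind=10.88.0.2,fork SYSTEM:'echo public' &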
Well, sure, but I'd argue that was a flawed assumption that just happened to work because of a pasta bug. Witness its total non-portability.
Sure, it's pretty easy to make it an option.
It's not shared in general, it's just one port being forwarded, for a specific Layer-4 protocol.
...unless you see the "spliced" path as a loopback bypass, which is, at least, what I had in mind when I implemented it, and how I use it sometimes. This plus #24045 (comment) already makes two users...
True, in this case it's definitely surprising.
It's a bug I added tests for... I'd call it a feature, really. Originally, I was thinking of adding something symmetric to
Okay, yes, I would be fine with it, and I'm convinced it's an improvement over the current situation, especially given the scenario where one might bind the same port to loopback and non-loopback addresses in the container, which is not supported at the moment.
I confirm that in my case, rather than explicitly assuming something about the exposure of 127.0.0.1 in the container, I did not really think of it when it happened to work on my Fedora setup without modifications. 127.0.0.1 is the address Kind uses by default to expose its API server, and in my work on https://github.com/adelton/kind-in-pod I just went with minimal changes to the defaults.
Patches to change this behaviour are now merged into pasta upstream and should be in the next release.
This should be fixed now in 2024_10_30.ee7d0b6 and its corresponding Fedora 40 update.
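A quick way to verify the fix after updating, as a sketch (the expectation that the final curl no longer reaches the server follows from the rootful behaviour described in the report below):

pasta --version    # expect 2024_10_30.ee7d0b6 or later
podman run --name django -d -p 8000:8000 localhost/django 127.0.0.1:8000
curl http://127.0.0.1:8000/    # should no longer reach the 127.0.0.1-bound server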
+1
Issue Description
With previous rootless podman setups, having a process listen on 127.0.0.1 in the container and publishing that port to the host did not expose that process to the host. Or rather, while a connection could be made, it was killed right away (Connection reset by peer when tested with curl). This was very similar to the rootful podman behaviour (Couldn't connect to server).
With podman-5.2.2-1.fc40.x86_64 with passt-0^20240906.g6b38f07-1.fc40.x86_64 I see a change of behaviour -- the process in the container is reachable on the published port on the host even if the process in the container is supposed to only listen on 127.0.0.1.
Steps to reproduce the issue
podman build -t localhost/django .
podman rm -f django ; podman run --name django -d -p 8000:8000 localhost/django 127.0.0.1:8000
curl -s http://127.0.0.1:8000/ | head
Check that curl does not show anything: curl http://127.0.0.1:8000/
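The build step assumes a Dockerfile that is not included in the report; a minimal sketch that would match the steps above (package and project names are illustrative):

FROM fedora:40
RUN dnf install -y python3-django && dnf clean all
RUN django-admin startproject demo /demo
WORKDIR /demo
# the listen address (127.0.0.1:8000 in the run command above) is passed
# as the argument to Django's development server
ENTRYPOINT ["python3", "manage.py", "runserver"]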
Describe the results you received
With rootless podman-5.2.2-1.fc40.x86_64 with passt-0^20240906.g6b38f07-1.fc40.x86_64 I see
Describe the results you expected
With rootless podman-4.9.4-1.fc39.x86_64 with rootlessport I see
With rootful setup, both podman-4.9.4-1.fc39.x86_64 and podman-5.2.2-1.fc40.x86_64, I get
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
No
Additional environment details
Tested with stock Fedora packages.
Additional information
Deterministic on a fresh Fedora server installation.