
Inspecting a container's exposed ports returns an empty HostIp #17780

Closed

cristianrgreco opened this issue Mar 14, 2023 · 6 comments
Labels: kind/bug, locked - please file new issue/PR, network

Comments

@cristianrgreco commented Mar 14, 2023

Issue Description

I am creating a container with exposed ports. To find the mapped host ports, I inspect the started container. I would expect a list of bindings whose HostIp values distinguish the IPv4 and IPv6 bindings; for example, Docker returns:

[
  { "HostIp": "0.0.0.0", "HostPort": "50000" },
  { "HostIp": "::1", "HostPort": "50000" }
]

Note that the HostPort doesn't always match between the IPv4 and IPv6 bindings.

From this list I can pick the appropriate port depending on the IPv4/IPv6 preference. However, instead of returning such a list, Podman in the same case returns:

[
  { "HostIp": "", "HostPort": "50000" },
]

I do not know what the status of Podman's IPv6 support is; regardless, I would expect the HostIp to be either 0.0.0.0 or ::1.
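
For context, here is a minimal sketch of that selection step using dockerode, roughly how we consume the inspect output. The container name ("my-container"), the example port ("8080/tcp"), and the pickHostPort helper are illustrative only, not part of any Podman or Docker API; the client is assumed to reach the engine via DOCKER_HOST.

import Docker from "dockerode";

// Pick a host port from the published bindings, preferring an IPv4 wildcard bind.
// `bindings` is the array found at NetworkSettings.Ports["8080/tcp"] in the inspect output.
function pickHostPort(bindings: Array<{ HostIp: string; HostPort: string }>): string | undefined {
  const ipv4 = bindings.find((b) => b.HostIp === "0.0.0.0");
  return (ipv4 ?? bindings[0])?.HostPort;
}

async function main(): Promise<void> {
  const docker = new Docker(); // honours DOCKER_HOST, so it also works against the Podman compat socket
  const info = await docker.getContainer("my-container").inspect();
  const bindings = info.NetworkSettings.Ports["8080/tcp"] ?? [];
  console.log(pickHostPort(bindings)); // e.g. "50000"
}

main();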

Steps to reproduce the issue

  1. Create a container with exposed ports.
  2. Inspect the container.
  3. Observe that the HostIp is an empty string.
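
A minimal reproduction sketch with dockerode, under the same assumptions as above (the image name and port are placeholders; DOCKER_HOST is expected to point at the Podman compat socket, as shown in the environment details below):

import Docker from "dockerode";

async function main(): Promise<void> {
  const docker = new Docker(); // e.g. DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
  const container = await docker.createContainer({
    Image: "docker.io/library/nginx:alpine", // placeholder image
    ExposedPorts: { "80/tcp": {} },
    HostConfig: { PortBindings: { "80/tcp": [{ HostPort: "0" }] } }, // "0" = dynamically assigned host port
  });
  await container.start();
  const info = await container.inspect();
  // Per the description above, Docker returns HostIp values like "0.0.0.0" and "::1" here; Podman returns "".
  console.log(JSON.stringify(info.NetworkSettings.Ports["80/tcp"], null, 2));
}

main();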

Describe the results you received

{ "HostIp": "", "HostPort": "50000" }

Describe the results you expected

{ "HostIp": "<something>", "HostPort": "50000" }

podman info output

host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2:2.1.7-0debian9999+obs15.6_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 87.46
    systemPercent: 5.69
    userPercent: 6.84
  cpus: 2
  distribution:
    codename: jammy
    distribution: ubuntu
    version: "22.04"
  eventLogger: journald
  hostname: fv-az646-90
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 123
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.15.0-1034-azure
  linkmode: dynamic
  logDriver: journald
  memFree: 4857856000
  memTotal: 7281278976
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_101:1.8.1-0debian9999+obs52.3_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.0.1-2_amd64
    version: |-
      slirp4netns version 1.0.1
      commit: 6a7b16babc95b6a3056b33fb45b74a6f62262dd4
      libslirp: 4.6.1
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 0h 4m 56.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/runner/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/runner/.local/share/containers/storage
  graphRootAllocated: 89297309696
  graphRootUsed: 58336636928
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1001/containers
  transientStore: false
  volumePath: /home/runner/.local/share/containers/storage/volumes
version:
  APIVersion: 4.4.2
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.19.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock

Additional information

No response

@Luap99 (Member) commented Mar 14, 2023

How are the ports configured when you create the container?
I would expect at least both 0.0.0.0 and ::, but not ::1 for IPv6, unless you explicitly set that.

For Podman, no host IP should mean dual stack by default.

@cristianrgreco (Author)

Hi @Luap99, this is how we're configuring the ports when creating the container:

{ [exposedPort]: [{ HostPort: "0" }] }

We do this to get a dynamically assigned host port.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan (Member) commented Jul 29, 2023

@Luap99 any follow up on this?

@Luap99 (Member) commented Jul 31, 2023

Showing both ::1 and 0.0.0.0 simply makes no sense to me; they contradict each other, so I don't know what Docker is doing with that.

One can either bind dual stack (no host IP, which binds every host address), only IPv4 (0.0.0.0, meaning all IPv4 addresses), or only IPv6 (::, meaning all IPv6 addresses). So one can either show a single port mapping with no HostIp, as we do currently, or two port mappings with host IPs of 0.0.0.0 and ::.

Since you do not request a host IP, I do not see where the ::1 IPv6 localhost address comes from, or why you would expect it.
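
To make those three cases concrete, here is a hedged sketch (not Podman code) of how a client could map the reported HostIp to an address it can actually connect to; resolveConnectHost and the preferIpv6 flag are hypothetical names used for illustration only.

// Map a published port's HostIp to a dialable address.
// "" (Podman's dual-stack default), "0.0.0.0" and "::" are wildcard binds, not dialable addresses.
function resolveConnectHost(hostIp: string, preferIpv6 = false): string {
  if (hostIp === "") return preferIpv6 ? "::1" : "127.0.0.1"; // dual stack: either loopback works
  if (hostIp === "0.0.0.0") return "127.0.0.1";               // bound on all IPv4 addresses
  if (hostIp === "::") return "::1";                          // bound on all IPv6 addresses
  return hostIp;                                              // a specific bind address
}

// e.g. resolveConnectHost("") === "127.0.0.1"; resolveConnectHost("::") === "::1"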

@Luap99 Luap99 added the network label Oct 19, 2023
ardan-bkennedy pushed a commit to ardanlabs/service that referenced this issue Mar 6, 2024
This patch adds support for running service using Podman. There are a few notable changes that were required to enable
Podman support, described below.

Podman keeps `HostIP` empty instead of using `0.0.0.0`. This behavior required an update to `extractIPPort`. More
information in containers/podman#17780.

Podman prefers to use fully-qualified container images (i.e., `example.com/foo/bar:latest`) but will perform
[short-name aliasing](https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#short-name-aliasing)
for unqualified container images. When an unqualified image such as `foo/bar:latest` is loaded into a kind cluster it
will be stored as `localhost/foo/bar:latest`.

Using backticks for command substitution isn't POSIX compatible. Switched the backticks to `$()` instead to support
alternative shells (e.g., `/bin/fish`).
@Luap99 (Member) commented Jun 15, 2024

I think that was changed in Podman 5.0.

@Luap99 Luap99 closed this as completed Jun 15, 2024
@stale-locking-app stale-locking-app bot added the locked - please file new issue/PR label Sep 14, 2024
@stale-locking-app stale-locking-app bot locked as resolved and limited conversation to collaborators Sep 14, 2024