
Compose not working after upgrade to 4.6.2 #19930

Closed
p-fruck opened this issue Sep 11, 2023 · 32 comments
Labels: kind/bug (Categorizes issue or PR as related to a bug), locked - please file new issue/PR

Comments

@p-fruck
Contributor

p-fruck commented Sep 11, 2023

Issue Description

After upgrading to Podman 4.6.2, I can no longer use docker-compose to launch any containers. Launching containers with podman run still works fine, however.

Steps to reproduce the issue

  1. Create a simple compose file like this:

     version: "3"
     services:
       app:
         image: nginx:alpine

  2. Ensure the podman socket is exported and activated:

     export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
     systemctl --user enable --now podman.socket

  3. Run docker-compose up
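Between steps 2 and 3 it can help to confirm that the socket actually answers Docker-compatibility API requests before involving compose; /_ping is the first endpoint compose hits. A minimal sketch, assuming the default rootless socket path:

```shell
# Sanity-check that the Podman socket answers the Docker compatibility API.
# Assumes a rootless socket at the default XDG runtime path; adjust if yours differs.
curl -sS --unix-socket "$XDG_RUNTIME_DIR/podman/podman.sock" http://d/_ping
# A healthy service should answer with HTTP 200.
```

If this fails, the problem is the socket itself rather than the compose-to-crun path discussed below.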

Describe the results you received

The following error occurs:

Error response from daemon: crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

write to `/proc/self/oom_score_adj`: Permission denied: OCI permission denied

Describe the results you expected

docker-compose should just spawn an nginx container. This still works when running podman run --rm nginx:alpine directly.

podman info output

host:
  arch: amd64
  buildahVersion: 1.31.2
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 92.62
    systemPercent: 2.15
    userPercent: 5.23
  cpus: 8
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: silverblue
    version: "38"
  eventLogger: journald
  freeLocks: 2042
  hostname: spectre
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.4.14-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 322568192
  memTotal: 7916376064
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-1.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.9-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.9
      commit: a538ac4ea1ff319bcfe2bf81cb5c6f687e2dc9d3
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20230823.ga7e4bfb-1.fc38.x86_64
    version: |
      pasta 0^20230823.ga7e4bfb-1.fc38.x86_64
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.fc38.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 7821848576
  swapTotal: 7915696128
  uptime: 1h 0m 17.00s (Approximately 0.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/philipp/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/philipp/.local/share/containers/storage
  graphRootAllocated: 998483427328
  graphRootUsed: 230994952192
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/philipp/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.2
  Built: 1693251588
  BuiltTime: Mon Aug 28 21:39:48 2023
  GitCommit: ""
  GoVersion: go1.20.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.2


### Podman in a container

No

### Privileged Or Rootless

Rootless

### Upstream Latest Release

Yes

### Additional environment details

I already ran `podman system migrate`, `podman system reset` and `rm -rf ~/.local/share/containers`, none of which had any impact

Docker Compose version v2.17.2

### Additional information

_No response_
@p-fruck added the kind/bug label on Sep 11, 2023
@rejahn

rejahn commented Sep 11, 2023

I can reproduce this on Fedora 38 with Podman 4.6.2 and docker-compose 2.20.3.

@rhatdan
Member

rhatdan commented Sep 11, 2023

@mheon PTAL

@mheon
Member

mheon commented Sep 11, 2023

Error response from daemon: crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

This is a nonfatal error when rootless (it has always happened, but was previously just printed to the journal and ignored). So either it suddenly became fatal, or the real error is being swallowed. Can you stop podman.socket, run podman system service -t 0 --log-level=debug instead (that is what the systemd service ultimately runs, plus debug logging), and see whether any errors are printed after compose runs?
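The debugging procedure described above, spelled out as commands (a sketch; the service flags are exactly those quoted in the comment):

```shell
# Stop the socket-activated unit so it does not re-grab the socket.
systemctl --user stop podman.socket

# Run the API service in the foreground with debug logging and no
# timeout (-t 0) -- this is what the systemd unit ultimately runs.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
podman system service -t 0 --log-level=debug
```

Then run docker-compose up from a second terminal and watch the service's output for errors.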

@p-fruck
Contributor Author

p-fruck commented Sep 11, 2023

This is the debug log of docker-compose start for the compose file above. The only suspicious message I encountered (apart from the ones already mentioned) is the Received: -1 after the netavark setup:

time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:new 0m+0h/0t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:active 0m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
@ - - [11/Sep/2023:22:43:51 +0200] "HEAD /_ping HTTP/1.1" 200 0 "" "Docker-Client/unknown-version (linux)"
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:idle 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:active 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="Looking up image \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" in local containers storage"
time="2023-09-11T22:43:51+02:00" level=debug msg="Trying \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" ..."
time="2023-09-11T22:43:51+02:00" level=debug msg="parsed reference into \"[overlay@/var/home/philipp/.local/share/containers/storage+/run/user/1000/containers]@433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Found image \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" as \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" in local containers storage"
time="2023-09-11T22:43:51+02:00" level=debug msg="Found image \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" as \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" in local containers storage ([overlay@/var/home/philipp/.local/share/containers/storage+/run/user/1000/containers]@433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629)"
@ - - [11/Sep/2023:22:43:51 +0200] "GET /v1.41/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.oneoff%3DFalse%22%3Atrue%2C%22com.docker.compose.project%3Dtest%22%3Atrue%7D%7D HTTP/1.1" 200 1407 "" "Docker-Client/unknown-version (linux)"
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:idle 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:active 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported"
time="2023-09-11T22:43:51+02:00" level=debug msg="Check for idmapped mounts support "
time="2023-09-11T22:43:51+02:00" level=debug msg="overlay: mount_data=lowerdir=/var/home/philipp/.local/share/containers/storage/overlay/l/6M6AHOY3JC2AMCQVUNFP354NV6:/var/home/philipp/.local/share/containers/storage/overlay/l/R4Z4QTOMMDQY4X4ARJ5XL53YXP:/var/home/philipp/.local/share/containers/storage/overlay/l/VTS64L4U7YNP6GCROTH2EOSF74:/var/home/philipp/.local/share/containers/storage/overlay/l/ZKRKA7L4SAHZ7CSDDRP4FZIV3N:/var/home/philipp/.local/share/containers/storage/overlay/l/KQGIW6IWVP2TZZX44HPGTGBHPF:/var/home/philipp/.local/share/containers/storage/overlay/l/JIYAT3EPP4ATZ73AOYCFTJZDU2:/var/home/philipp/.local/share/containers/storage/overlay/l/5XWI5U27TF43OYNGQJL7HYCTC7:/var/home/philipp/.local/share/containers/storage/overlay/l/OOI6XHAQCXHYNEVWWDX7C7JRM3,upperdir=/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/diff,workdir=/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/work,,userxattr,context=\"system_u:object_r:container_file_t:s0:c458,c539\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Made network namespace at /run/user/1000/netns/netns-5a70bcc3-ce29-e06a-d6b5-c4be348954bf for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:51+02:00" level=debug msg="creating rootless network namespace with name \"rootless-netns-366f197f86d21c9c17ef\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Mounted container \"1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7\" at \"/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/merged\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Created root filesystem for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 at /var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/merged"
time="2023-09-11T22:43:51+02:00" level=debug msg="slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/1000/netns/rootless-netns-366f197f86d21c9c17ef tap0"
time="2023-09-11T22:43:51+02:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
time="2023-09-11T22:43:51+02:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/run/systemd/resolve/stub-resolv.conf\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Successfully loaded network test_default: &{test_default 69c15d74c57dd1952a1a25e266a700e54a3914ab8d776b5bd85639429ec6bad9 bridge podman1 2023-09-11 20:52:40.305670075 +0200 CEST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[com.docker.compose.network:default com.docker.compose.project:test com.docker.compose.version:2.17.2] map[isolate:true] map[driver:host-local]}"
time="2023-09-11T22:43:51+02:00" level=debug msg="Successfully loaded 2 networks"
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network test_default
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.5/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
[DEBUG netavark::firewall::varktables::types] Add extra isolate rules
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-CEDCA8372B348 created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_1 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT created on table nat and chain NETAVARK-CEDCA8372B348
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-CEDCA8372B348
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-CEDCA8372B348 created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
[DEBUG netavark::dns::aardvark] Spawning aardvark server
[DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "--user", "/usr/libexec/podman/aardvark-dns", "--config", "/run/user/1000/containers/networks/aardvark-dns", "-p", "53", "run"]
[DEBUG netavark::commands::setup] {
        "test_default": StatusBlock {
            dns_search_domains: Some(
                [
                    "dns.podman",
                ],
            ),
            dns_server_ips: Some(
                [
                    10.89.0.1,
                ],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "86:d9:b2:39:3b:bc",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.89.0.1,
                                    ),
                                    ipnet: 10.89.0.5/24,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"
time="2023-09-11T22:43:52+02:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
time="2023-09-11T22:43:52+02:00" level=debug msg="Setting Cgroups for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 to user.slice:libpod:1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:52+02:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2023-09-11T22:43:52+02:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/merged\""
time="2023-09-11T22:43:52+02:00" level=debug msg="Created OCI spec for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 at /var/home/philipp/.local/share/containers/storage/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata/config.json"
time="2023-09-11T22:43:52+02:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
time="2023-09-11T22:43:52+02:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 -u 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 -r /usr/bin/crun -b /var/home/philipp/.local/share/containers/storage/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata -p /run/user/1000/containers/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata/pidfile -n test-app-1 --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/philipp/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/home/philipp/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7]"
time="2023-09-11T22:43:52+02:00" level=info msg="Running conmon under slice user.slice and unitName libpod-conmon-1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7.scope"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

time="2023-09-11T22:43:52+02:00" level=debug msg="Received: -1"
time="2023-09-11T22:43:52+02:00" level=debug msg="Cleaning up container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:52+02:00" level=debug msg="Tearing down network namespace at /run/user/1000/netns/netns-5a70bcc3-ce29-e06a-d6b5-c4be348954bf for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:52+02:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/run/systemd/resolve/stub-resolv.conf\""
[DEBUG netavark::commands::teardown] "Tearing down.."
[INFO  netavark::firewall] Using iptables firewall driver
[INFO  netavark::network::bridge] removing bridge podman1
[DEBUG netavark::firewall::varktables::types] Add extra isolate rules
[DEBUG netavark::commands::teardown] "Teardown complete"
time="2023-09-11T22:43:52+02:00" level=debug msg="Cleaning up rootless network namespace"
time="2023-09-11T22:43:52+02:00" level=debug msg="Unmounted container \"1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7\""
time="2023-09-11T22:43:52+02:00" level=info msg="Request Failed(Internal Server Error): crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n\nwrite to `/proc/self/oom_score_adj`: Permission denied: OCI permission denied"
@ - - [11/Sep/2023:22:43:51 +0200] "POST /v1.41/containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/start HTTP/1.1" 500 223 "" "Docker-Client/unknown-version (linux)"
time="2023-09-11T22:43:52+02:00" level=debug msg="IdleTracker:idle 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:52+02:00" level=debug msg="IdleTracker:closed 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:56+02:00" level=info msg="Received shutdown signal \"interrupt\", terminating!" PID=17007
time="2023-09-11T22:43:56+02:00" level=info msg="Invoking shutdown handler \"service\"" PID=17007
time="2023-09-11T22:43:56+02:00" level=debug msg="API service forced shutdown, ignoring timeout Duration"
time="2023-09-11T22:43:56+02:00" level=debug msg="API service shutdown, 0/1 connection(s)"
time="2023-09-11T22:43:56+02:00" level=debug msg="API service forced shutdown, ignoring timeout Duration"
time="2023-09-11T22:43:56+02:00" level=debug msg="Completed shutdown handler \"service\", duration 0s" PID=17007
time="2023-09-11T22:43:56+02:00" level=info msg="Invoking shutdown handler \"libpod\"" PID=17007

@mheon
Member

mheon commented Sep 11, 2023

Anything in the journal logs from Conmon?

@p-fruck
Contributor Author

p-fruck commented Sep 11, 2023

Not much information that isn't already contained in the podman logs:

Sep 12 01:04:31 spectre podman[26829]: time="2023-09-12T01:04:31+02:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 49515f660695bc956bceac95ecf9c03be9b121438ae611e00744b7a559e4261b -u 49515f660695bc956bceac 95ecf9c03be9b121438ae611e00744b7a559e4261b -r /usr/bin/crun -b /var/home/philipp/.local/share/containers/storage/overlay-containers 49515f660695bc956bceac95ecf9c03be9b121438ae611e00744b7a559e4261b/userdata -p /run/user/1000/containers/overlay-containers/49515f660695bc956bceac95ecf9c03be9b121438ae611e00744b7a559e4261b/userdata/pidfile -n test-app-1 --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/49515f660695bc956bceac95ecf9c03be9b121438ae611e00744b7a559e4261b/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/philipp/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/home/philipp/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 49515f660695bc956bceac95ecf9c03be9b121438ae611e00744b7a559e4261b]"
Sep 12 01:04:31 spectre podman[26829]: time="2023-09-12T01:04:31+02:00" level=info msg="Running conmon under slice user.slice and unitName libpod-conmon-49515f660695bc956bceac95ecf9c03be9b121438ae611e00744b7a559e4261b.scope"              
Sep 12 01:04:31 spectre podman[26917]: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied                                                                                                                             
Sep 12 01:04:31 spectre conmon[26917]: conmon 49515f660695bc956bce <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied                                                                                                   
Sep 12 01:04:31 spectre systemd[1979]: Started libpod-conmon-49515f660695bc956bceac95ecf9c03be9b121438ae611e00744b7a559e4261b.scope.                                                                                                          
░░ Subject: A start job for unit UNIT has finished successfully                                                                                                                                                                               
░░ Defined-By: systemd                                                                                                                                                                                                                        
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel                                                                                                                                                                      
░░                                                                                                                                                                                                                                            
░░ A start job for unit UNIT has finished successfully.                                                                                                                                                                                       
░░                                                                                                                                                                                                                                            
░░ The job identifier is 2824.                                                                                                                                                                                                                
Sep 12 01:04:31 spectre conmon[26919]: conmon 49515f660695bc956bce <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}                                                                                                       
Sep 12 01:04:31 spectre conmon[26919]: conmon 49515f660695bc956bce <ndebug>: terminal_ctrl_fd: 12                                                                                                                                             
Sep 12 01:04:31 spectre conmon[26919]: conmon 49515f660695bc956bce <ndebug>: winsz read side: 15, winsz write side: 15 
Sep 12 01:04:31 spectre conmon[26921]: conmon 49515f660695bc956bce <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 12 01:04:31 spectre conmon[26919]: conmon 49515f660695bc956bce <nwarn>: runtime stderr: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
                                        
                                       write to `/proc/self/oom_score_adj`: Permission denied                          
Sep 12 01:04:31 spectre conmon[26919]: conmon 49515f660695bc956bce <error>: Failed to create container: exit status 1  
Sep 12 01:04:31 spectre podman[26829]: time="2023-09-12T01:04:31+02:00" level=debug msg="Received: -1"

Retrieved by simply running journalctl --user -xef. Are there any options to debug conmon further?

@Luap99
Member

Luap99 commented Sep 12, 2023

Can you try to downgrade crun? This sounds like crun issue containers/crun#1302.
cc @giuseppe

@giuseppe
Member

It is fixed by #19843.

@giuseppe
Member

crun previously ignored oom_score_adj when it was set to 0, which is the wrong thing to do.

The root cause is docker-compose hardcoding the value to 0, so we added the workaround to Podman.

As a temporary fix you can downgrade crun; alternatively, tweak systemd to run user sessions with oom_score_adj=0.
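One possible way to apply the systemd tweak mentioned above is a drop-in for the user manager, so rootless sessions (and conmon/crun under them) already start at oom_score_adj=0 and a child asking for 0 is no longer trying to lower its score. This is only a sketch, not an official recommendation; OOMScoreAdjust= and the user@.service drop-in path are standard systemd, but verify the effect on your system:

```shell
# Sketch: run user sessions with oom_score_adj=0 via a systemd drop-in.
sudo mkdir -p /etc/systemd/system/user@.service.d
sudo tee /etc/systemd/system/user@.service.d/oom-score-adj.conf <<'EOF'
[Service]
OOMScoreAdjust=0
EOF
sudo systemctl daemon-reload
# Log out and back in so the user manager is restarted with the new setting.
```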

@p-fruck
Contributor Author

p-fruck commented Sep 12, 2023

Thanks @Luap99 @giuseppe, I rolled back to a previous ostree state using crun 1.8.7 which fixed the issue. Looking forward to podman 4.6.3 being released 😄

@Rtapy

Rtapy commented Sep 19, 2023

Can you try to downgrade crun? This sounds like crun issue containers/crun#1302. cc @giuseppe

Hi, thanks for solving this problem.
How can I downgrade crun to 1.8.7?

@cardil

cardil commented Sep 20, 2023

Just FYI: I see the same on the latest Fedora 38 while trying to use the testcontainers.org library with Podman's user socket.

@anarute

anarute commented Sep 20, 2023

how can i downgrade crun to 1.8.7 ?

@Rtapy you can run sudo dnf downgrade crun

@cardil

cardil commented Sep 20, 2023

how can i downgrade crun to 1.8.7 ?

For Fedora 38, you can install crun 1.8.7 with:

# dnf install -y https://kojipkgs.fedoraproject.org//packages/crun/1.8.7/1.fc38/x86_64/crun-1.8.7-1.fc38.x86_64.rpm

@bomgar

bomgar commented Sep 21, 2023

If you have the old package in the pacman cache (Arch Linux):

sudo pacman -U file:///var/cache/pacman/pkg/crun-1.8.7-1-x86_64.pkg.tar.zst

@abraithwaite

Hi! I'm using Podman with podman machine on macOS and trying to figure out the right way to roll back crun.

The default machine is Fedora CoreOS (maybe this can be swapped, but I'd like to keep things as vanilla as possible).

I haven't used CoreOS much at all since I prefer this stuff to be transparent, but I looked at the docs for rolling back the image and found this command; however, there's nothing to roll back to.

[root@localhost core]# rpm-ostree rollback -r
error: No rollback deployment found

Found via:
https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/#_manual_rollbacks

Any recommendations for a workaround on MacOS?

17:55:56 $ podman version
Client:       Podman Engine
Version:      4.6.1
API Version:  4.6.1
Go Version:   go1.20.7
Git Commit:   f3069b3ff48e30373c33b3f5976f15abf8cfee20
Built:        Thu Aug 10 11:13:43 2023
OS/Arch:      darwin/arm64

Server:       Podman Engine
Version:      4.6.2
API Version:  4.6.2
Go Version:   go1.20.7
Built:        Tue Sep 12 13:07:26 2023
OS/Arch:      linux/arm64

I also tried downgrading podman and resetting the machine, but the image appears to have the new engine installed anyway. Maybe there's a way to specify which image version podman should download? I haven't found it yet.

@abraithwaite

abraithwaite commented Sep 23, 2023

Found this:

https://fedoraproject.org/coreos/release-notes/?arch=x86_64&stream=stable

I had to manually download the old image by URL hacking, because there don't seem to be any links from the release-notes page.

curl -LO https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/38.20230819.3.0/aarch64/fedora-coreos-38.20230819.3.0-qemu.aarch64.qcow2.xz
podman machine init --image-path fedora-coreos-38.20230819.3.0-qemu.aarch64.qcow2.xz
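
The URL above appears to follow a predictable pattern. A sketch that assembles the download URL from stream, build, and architecture (assumption: the prod/streams layout inferred from this one URL holds for other builds and arches; it is not a documented API):

```shell
# Build the Fedora CoreOS qcow2 download URL from its components.
# Pattern inferred from the URL used above; adjust stream/build/arch as needed.
stream="stable"
build="38.20230819.3.0"
arch="aarch64"
url="https://builds.coreos.fedoraproject.org/prod/streams/${stream}/builds/${build}/${arch}/fedora-coreos-${build}-qemu.${arch}.qcow2.xz"
echo "$url"
```

Valid build IDs can be read off the release notes page linked above.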

That worked! 🎉

Freaking love the simplicity of podman. Even with obscure error messages like the one thrown here, it wasn't so bad to dig into the issue and find out how to fix it. Great work, team!

Maybe sometime I'll learn rpm-ostree, but I'll save that for a later date. :-)

Hopefully I don't have to worry about auto-updates? Of course, it updated before I finished writing the comment...

Time to try again. I raced podman machine init against podman machine ssh, then ran this once I got a shell:

[core@localhost ~]$ sudo systemctl disable --now zincati.service

Seems to have worked 🤞.

[core@localhost ~]$ sudo systemctl status zincati
○ zincati.service - Zincati Update Agent
     Loaded: loaded (/usr/lib/systemd/system/zincati.service; disabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead) since Fri 2023-09-22 18:14:57 PDT; 15s ago
   Duration: 1.821s
       Docs: https://github.com/coreos/zincati
    Process: 1949 ExecStart=/usr/libexec/zincati agent ${ZINCATI_VERBOSITY} (code=exited, status=0/SUCCESS)
   Main PID: 1949 (code=exited, status=0/SUCCESS)
        CPU: 97ms

Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] registering as the update driver for rpm-ostree
Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] initialization complete, auto-updates logic enabled
Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::strategy] update strategy: immediate
Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] reached steady state, periodically polling for updates
Sep 22 18:14:55 localhost.localdomain systemd[1]: Started zincati.service - Zincati Update Agent.
Sep 22 18:14:56 localhost.localdomain zincati[1949]: [INFO  zincati::cincinnati] current release detected as not a dead-end
Sep 22 18:14:56 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] target release '38.20230902.3.0' selected, proceeding to stage it
Sep 22 18:14:57 localhost.localdomain systemd[1]: Stopping zincati.service - Zincati Update Agent...
Sep 22 18:14:57 localhost.localdomain systemd[1]: zincati.service: Deactivated successfully.
Sep 22 18:14:57 localhost.localdomain systemd[1]: Stopped zincati.service - Zincati Update Agent.

@mheon
Member

mheon commented Sep 27, 2023

Podman 4.7.0 is releasing today with a fix allowing the most recent crun version to be used. Closing as such.

SoMuchForSubtlety added a commit to SoMuchForSubtlety/main that referenced this issue Sep 28, 2023
@ncdc

ncdc commented Sep 28, 2023

@ankon it's the same issue, I just didn't include the log from the podman vm

@ncdc

ncdc commented Sep 28, 2023

(this is how it manifests client-side)

@mheon
Member

mheon commented Sep 28, 2023

4.7 should land in Fedora stable in the next few days, and from there it will land in FCOS in at most two weeks (could be sooner; AFAIK they run on a two-week cycle over there).

@Gyromancer

FYI this is still an issue with podman 4.7 and fedora-coreos-38.20230918.2.0-qemu.x86_64.qcow2.xz.

I also upgraded to Podman 4.7.0 and am still receiving the crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied error.

@Luap99
Member

Luap99 commented Sep 29, 2023

As stated above, you need v4.7 on the server side; you can check with podman version.
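
The client and server versions can differ, as in the podman version output earlier in this thread. A sketch of scripting that check with sort -V, with the server version hard-coded for illustration (on a live system it could come from podman version --format '{{.Server.Version}}' instead):

```shell
# Check whether the server-side Podman is at least 4.7.0 (the release
# carrying the crun fix). server_version is hard-coded for illustration;
# replace with: server_version=$(podman version --format '{{.Server.Version}}')
server_version="4.6.2"
required="4.7.0"
lowest=$(printf '%s\n' "$required" "$server_version" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "server $server_version has the fix"
else
  echo "server $server_version is too old, need >= $required"
fi
```

The sort -V comparison treats version components numerically, so it also handles cases like 4.10.0 vs 4.9.0 correctly.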

@Gyromancer

Gyromancer commented Sep 29, 2023

As stated above, you need v4.7 on the server side; you can check with podman version.

I'm on Bazzite, not CoreOS. podman -v shows 4.7.0.

Julian added a commit to bowtie-json-schema/bowtie that referenced this issue Sep 29, 2023
Let's see if this fragile nonsense breaks again.

What was here though was hiding containers/podman#19930
(i.e. a real 500 error coming from a podman bug which was
being reported as the image not existing).
@andrewgdunn

@mheon do you know when this hits CentOS AppStream?

@mheon
Member

mheon commented Oct 5, 2023

4.7 probably won't hit CentOS Stream since it's not destined for RHEL (it's a purely upstream release). The next RHEL release is probably 4.9 (with 4.8 coming out ~late November and 4.9 early next year).

@andrewgdunn

Looks like 4.7.2 hit AppStream today and it does indeed resolve this issue! Thanks for the great work.

@luckylinux

I disagree. Podman 4.7.2 on Debian Trixie (Testing) still fails with the same error as did Podman 4.3.1 on Debian Bookworm (Stable). See #20886.

@JKHSDTV

JKHSDTV commented Jan 10, 2024

I am not sure if this is helpful (but it might be to future Googlers): I got a 'false positive' of sorts. I ran into both of the above errors (unable to upgrade to tcp, received 409 with docker-compose run; Error response from daemon: crun: open executable: Permission denied: OCI permission denied with docker-compose up), and it turned out my actual issue was not the problem discussed above, but merely that my entrypoint was missing the +x bit. Hope it helps someone going down the same rabbit hole.

Romain-Geissler-1A added a commit to Romain-Geissler-1A/podman that referenced this issue Feb 18, 2024
…man.

This effectively fix errors like "unable to upgrade to tcp, received
409" like containers#19930 in the special case where podman itself is running
rootful but inside a container which itself is rootless.

[NO NEW TESTS NEEDED]

Signed-off-by: Romain Geissler <[email protected]>
Romain-Geissler-1A added a commit to Romain-Geissler-1A/podman that referenced this issue Feb 26, 2024
…man.

This effectively fix errors like "unable to upgrade to tcp, received
409" like containers#19930 in the special case where podman itself is running
rootful but inside a container which itself is rootless.

[NO NEW TESTS NEEDED]

Signed-off-by: Romain Geissler <[email protected]>
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/podman that referenced this issue Feb 26, 2024
…man.

This effectively fix errors like "unable to upgrade to tcp, received
409" like containers#19930 in the special case where podman itself is running
rootful but inside a container which itself is rootless.

[NO NEW TESTS NEEDED]

Signed-off-by: Romain Geissler <[email protected]>
TomSweeneyRedHat pushed a commit to TomSweeneyRedHat/podman that referenced this issue Mar 7, 2024
…man.

This effectively fix errors like "unable to upgrade to tcp, received
409" like containers#19930 in the special case where podman itself is running
rootful but inside a container which itself is rootless.

[NO NEW TESTS NEEDED]

Signed-off-by: Romain Geissler <[email protected]>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Apr 12, 2024
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 12, 2024