rhel78: mountpoint for devices not found #7004

Closed
minsikl opened this issue Jul 16, 2020 · 16 comments
Labels: kind/bug, locked - please file new issue/PR

@minsikl

minsikl commented Jul 16, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

podman run as a non-root user does not work. The operating system is RHEL 7.8. slirp4netns and podman were installed via yum, and user.max_user_namespaces=28633 is configured on the system.
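
For reference, the rootless prerequisites mentioned above can be double-checked with something like the following (a sketch; the subordinate ID ranges actually in use appear under IDMappings in the podman info output below):

$ sysctl user.max_user_namespaces              # should report 28633 as configured above
$ grep "$(id -un)" /etc/subuid /etc/subgid     # rootless podman needs subordinate ID ranges here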

Steps to reproduce the issue:

  1. Run the following command as a non-root user:
podman run --rm -i -t busybox echo hello

Describe the results you received:

Error: container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": OCI runtime error

Describe the results you expected:

hello

Additional information you deem important (e.g. issue happens only occasionally):

It works with the root user, and runc works with a non-root user when invoked directly (a sketch of such a check follows the debug log below). The following is part of the podman debug log:

time="2020-07-16T02:07:37-05:00" level=debug msg="Received: -1"
time="2020-07-16T02:07:37-05:00" level=debug msg="Cleaning up container 94adf317c678e7500e59cd943089b79636acb63a9e07897b4511262ab9d3342d"
time="2020-07-16T02:07:37-05:00" level=debug msg="Tearing down network namespace at /run/user/2050/netns/cni-cd27ba28-5a12-05fb-ae23-a4e7aaef8fef for container 94adf317c678e7500e59cd943089b79636acb63a9e07897b4511262ab9d3342d"
time="2020-07-16T02:07:37-05:00" level=debug msg="Error unmounting /home/user1/.local/share/containers/storage/overlay/9fbdc45f91a461d4cb1a7e4bbebe93defa88efbacf533ef5c415aee948dae09c/merged with fusermount3 - exec: \"fusermount3\": executable file not found in $PATH"
time="2020-07-16T02:07:37-05:00" level=debug msg="unmounted container \"94adf317c678e7500e59cd943089b79636acb63a9e07897b4511262ab9d3342d\""
time="2020-07-16T02:07:37-05:00" level=debug msg="ExitCode msg: \"time=\\\"2020-07-16t02:07:37-05:00\\\" level=warning msg=\\\"signal: killed\\\"\\ntime=\\\"2020-07-16t02:07:37-05:00\\\" level=error msg=\\\"container_linux.go:349: starting container process caused \\\\\\\"process_linux.go:297: applying cgroup configuration for process caused \\\\\\\\\\\\\\\"mountpoint for devices not found\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\ncontainer_linux.go:349: starting container process caused \\\"process_linux.go:297: applying cgroup configuration for process caused \\\\\\\"mountpoint for devices not found\\\\\\\"\\\": oci runtime error\""
time="2020-07-16T02:07:37-05:00" level=error msg="time=\"2020-07-16T02:07:37-05:00\" level=warning msg=\"signal: killed\"\ntime=\"2020-07-16T02:07:37-05:00\" level=error msg=\"container_linux.go:349: starting container process caused \\\"process_linux.go:297: applying cgroup configuration for process caused \\\\\\\"mountpoint for devices not found\\\\\\\"\\\"\"\ncontainer_linux.go:349: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mountpoint for devices not found\\\"\": OCI runtime error"

Output of podman version:

Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.12.12
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.12
  podman version: 1.6.4
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.15-1.el7_8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.15, commit: 372b4a12f1c2df4f70c280d41173b60acd3f1260'
  Distribution:
    distribution: '"rhel"'
    version: "7.8"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 2050
      size: 1
    - container_id: 1
      host_id: 558752
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 2050
      size: 1
    - container_id: 1
      host_id: 558752
      size: 65536
  MemFree: 16207659008
  MemTotal: 67250630656
  OCIRuntime:
    name: runc
    package: runc-1.0.0-67.rc10.el7_8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 7665086464
  SwapTotal: 7665086464
  arch: amd64
  cpus: 8
  eventlogger: file
  hostname: tds-sbc2-el
  kernel: 3.10.0-1127.10.1.el7.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.3-4.el7_8.x86_64
    Version: |-
      slirp4netns version 0.4.3
      commit: 2244b9b6461afeccad1678fac3d6e478c28b4ad6
  uptime: 910h 24m 25.53s (Approximately 37.92 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  ConfigFile: /home/user1/.config/containers/storage.conf
  ContainerStore:
    number: 10
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
      Version: |-
        fuse-overlayfs: version 0.7.2
        FUSE library version 3.6.1
        using FUSE kernel interface version 7.29
  GraphRoot: /home/user1/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 3
  RunRoot: /run/user/2050/containers
  VolumePath: /home/user1/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.6.4-18.el7_8.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
physical server

@openshift-ci-robot added the kind/bug label Jul 16, 2020
@rhatdan
Member

rhatdan commented Jul 17, 2020

$ podman unshare cat /proc/self/uid_map

@rhatdan
Member

rhatdan commented Jul 17, 2020

This looks like it is trying to apply cgroups in a rootless environment?
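
For reference, one way to see which cgroup manager and CgroupParent a rootless run actually ends up with is to grep the debug output, a diagnostic sketch:

$ podman --log-level=debug run --rm busybox true 2>&1 | grep -i cgroup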

@minsikl
Author

minsikl commented Jul 17, 2020

$ podman unshare cat /proc/self/uid_map
         0       2050          1
         1     624288      65536

@rhatdan
Member

rhatdan commented Jul 18, 2020

@giuseppe Any thoughts on what is going on? It looks like it is trying to use cgroups in rootless containers on RHEL 7.

@giuseppe
Member

The error is coming from runc; we should not be setting any devices cgroup, and I've never seen that error before.

Can you show me the output of grep cgroup /proc/self/mountinfo from the host?

@minsikl
Author

minsikl commented Jul 19, 2020

$ grep cgroup /proc/self/mountinfo
25 18 0:21 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:8 - tmpfs tmpfs ro,mode=755
26 25 0:22 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:9 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
29 25 0:25 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:10 - cgroup cgroup rw,perf_event
30 25 0:26 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,memory
31 25 0:27 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:12 - cgroup cgroup rw,net_prio,net_cls
32 25 0:28 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,freezer
33 25 0:29 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,devices
34 25 0:30 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,blkio
35 25 0:31 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,pids
36 25 0:32 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,cpuacct,cpu
37 25 0:33 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,cpuset
38 25 0:34 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,hugetlb
45 20 0:33 / /dev/cpuset rw,relatime shared:28 - cgroup cgroup rw,cpuset

@a-trout-in-the-milk

a-trout-in-the-milk commented Jul 24, 2020

Additional details on this.

ERROR:

A rootless podman run on the RHEL node consistently yields this error:

container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": OCI runtime error

DETAILS ON FAILING ENVIRONMENT:

The failing node is a physical HPE server.

Running RHEL 7.8:

HPE MPI 1.5, Build 721r200103T2000.rhel77hpe-200103T2000
NAME="Red Hat Enterprise Linux Server"
VERSION="7.8 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.8"
PRETTY_NAME="Red Hat Enterprise Linux"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.8:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.8
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.8"
Red Hat Enterprise Linux Server release 7.8 (Maipo)
Red Hat Enterprise Linux Server release 7.8 (Maipo)
SELinux status:                 disabled

Real-time OS with the default 5% of CPU time reserved for non-deadline (non-RT) processes. There is one RT process running, an HPE Serviceguard config provider called cmproxyd, which runs in the sg.slice cgroup that HPE Serviceguard creates. On this node it is the only process running in the sg.slice cgroup:

# cat /proc/sys/kernel/sched_rt_period_us
1000000
# cat /proc/sys/kernel/sched_rt_runtime_us
950000
# cat /sys/fs/cgroup/cpu/sg.slice/cpu.rt_runtime_us
950000
# cat /etc/systemd/system/sg.slice
[Unit]
Description=Serviceguard Real Time Slice
Before=slices.target
Requires=sg-realtime-config.service
[Slice]
CPUAccounting=yes
[Install]
WantedBy=slices.target

# cat /etc/systemd/system/sg-realtime-config.service
[Unit]
Description=Serviceguard Real Time Slice configuration service
BindsTo=sg.slice
After=sg.slice
[Service]
Type=oneshot
ExecStart=/bin/sh -c "/usr/sbin/sysctl -n kernel.sched_rt_runtime_us > /sys/fs/cgroup/cpu,cpuacct/sg.slice/cpu.rt_runtime_us"
RemainAfterExit=yes
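
For reference, the RT budget of the root cgroup and each top-level slice can be listed with something like this (a sketch; it assumes the cpu controller is mounted at the path shown above):

# grep -H . /sys/fs/cgroup/cpu,cpuacct/cpu.rt_runtime_us /sys/fs/cgroup/cpu,cpuacct/*/cpu.rt_runtime_us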

Docker is installed:

# docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:46:54 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:45:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Podman is installed, and the container runtime is runc:

# podman version
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.12.12
OS/Arch:            linux/amd64
# podman info
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.15-1.el7_8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.15, commit: 372b4a12f1c2df4f70c280d41173b60acd3f1260'
  Distribution:
    distribution: '"rhel"'
    version: "7.8"
  MemFree: 180565471232
  MemTotal: 202310569984
  OCIRuntime:
    name: runc
    package: containerd.io-1.2.13-3.2.el7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
      spec: 1.0.1-dev
  SwapFree: 7907307520
  SwapTotal: 7907307520
  arch: amd64
  cpus: 20
  eventlogger: journald
  hostname: REDACTED
  kernel: 3.10.0-1127.8.2.el7.x86_64
  os: linux
  rootless: false
  uptime: 162h 20m 16.65s (Approximately 6.75 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: {}
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

Conmon:

# conmon --version
conmon version 2.0.15
commit: 372b4a12f1c2df4f70c280d41173b60acd3f1260

Only cgroups v1 is available/enabled. All expected cgroups are mounted:

[root@X]# findmnt -R /sys/fs/cgroup
TARGET                            SOURCE FSTYPE OPTIONS
/sys/fs/cgroup                    tmpfs  tmpfs  ro,nosuid,nodev,noexec,mode=755
├─/sys/fs/cgroup/systemd          cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=syst
├─/sys/fs/cgroup/perf_event       cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event
├─/sys/fs/cgroup/memory           cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory
├─/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls
├─/sys/fs/cgroup/freezer          cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer
├─/sys/fs/cgroup/devices          cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices
├─/sys/fs/cgroup/blkio            cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio
├─/sys/fs/cgroup/pids             cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids
├─/sys/fs/cgroup/cpu,cpuacct      cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu
├─/sys/fs/cgroup/cpuset           cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset
└─/sys/fs/cgroup/hugetlb          cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
[root@X]# cat /proc/1/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=98768300k,nr_inodes=24692075,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/sda4 / xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=26722 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
cgroup /dev/cpuset cgroup rw,relatime,cpuset 0 0
/dev/sda3 /home xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sda2 /boot xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdb1 /data xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sde1 /SBC2 xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/sdd1 /SBC1 xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/user/2050 tmpfs rw,nosuid,nodev,relatime,size=19756892k,mode=700,uid=2050,gid=2050 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=19756892k,mode=700 0 0
gvfsd-fuse /run/user/0/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0 0 0
tmpfs /run/netns tmpfs rw,nosuid,nodev,mode=755 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
tmpfs /run/user/2062 tmpfs rw,nosuid,nodev,relatime,size=19756892k,mode=700,uid=2062,gid=2062 0 0

libpod.conf attached:
libpod.conf.txt

ROOTED podman run SUCCEEDS ON HPE SERVER:

- Using systemd cgroup driver - processes delegated to machine.slice:

[root@X]# podman --cgroup-manager=systemd --log-level debug run -it fedora bash
DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf"
DEBU[0000] Merged system config "/usr/share/containers/libpod.conf": &{{false false false false false true} 0 {   [] [] []}  docker://  runc map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] systemd   /var/run/libpod -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman []   k8s.gcr.io/pause:3.1 /pause false false  2048 shm    false}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/fedora:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Using bridge netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 4 for container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] created container "9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4"
DEBU[0000] container "9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4" has work directory "/var/lib/containers/storage/overlay-containers/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/userdata"
DEBU[0000] container "9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4" has run directory "/var/run/containers/storage/overlay-containers/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/userdata"
DEBU[0000] New container created "9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4"
DEBU[0000] container "9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4" has CgroupParent "machine.slice/libpod-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope"
DEBU[0000] Handling terminal attach
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/MCOZCTPKMUI46YIKGTTOIQ7O2Z,upperdir=/var/lib/containers/storage/overlay/2afda43ca91f5e316bd3cb040b2d3c21b6342c564b66f233990b248866838095/diff,workdir=/var/lib/containers/storage/overlay/2afda43ca91f5e316bd3cb040b2d3c21b6342c564b66f233990b248866838095/work
DEBU[0000] mounted container "9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4" at "/var/lib/containers/storage/overlay/2afda43ca91f5e316bd3cb040b2d3c21b6342c564b66f233990b248866838095/merged"
DEBU[0000] Created root filesystem for container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 at /var/lib/containers/storage/overlay/2afda43ca91f5e316bd3cb040b2d3c21b6342c564b66f233990b248866838095/merged
DEBU[0000] Made network namespace at /var/run/netns/cni-1dab5267-754e-02e2-0736-a8093e9bc457 for container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4
INFO[0000] Got pod network &{Name:lucid_euler Namespace:lucid_euler ID:9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 NetNS:/var/run/netns/cni-1dab5267-754e-02e2-0736-a8093e9bc457 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
INFO[0000] About to add CNI network cni-loopback (type=loopback)
INFO[0000] Got pod network &{Name:lucid_euler Namespace:lucid_euler ID:9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 NetNS:/var/run/netns/cni-1dab5267-754e-02e2-0736-a8093e9bc457 Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
INFO[0000] About to add CNI network podman (type=bridge)
DEBU[0000] [0] CNI result: Interfaces:[{Name:cni-podman0 Mac:f6:3f:6d:ba:e4:84 Sandbox:} {Name:vethb6d30a94 Mac:6e:5d:11:03:26:e0 Sandbox:} {Name:eth0 Mac:c2:68:88:4a:fc:6b Sandbox:/var/run/netns/cni-1dab5267-754e-02e2-0736-a8093e9bc457}], IP:[{Version:4 Interface:0xc0006c6bd8 Address:{IP:10.88.0.13 Mask:ffff0000} Gateway:10.88.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]}
INFO[0000] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0000] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0000] Setting CGroups for container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 to machine.slice:libpod:9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 at /var/lib/containers/storage/overlay-containers/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -s -c 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 -u 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/userdata -p /var/run/containers/storage/overlay-containers/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -t --conmon-pidfile /var/run/containers/storage/overlay-containers/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope
DEBU[0000] Received: 11540
INFO[0000] Got Conmon PID as 11525
DEBU[0000] Created container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 in OCI runtime
DEBU[0000] Attaching to container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4
DEBU[0000] connecting to socket /var/run/libpod/socket/9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4/attach
DEBU[0000] Starting container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4 with command [bash]
DEBU[0000] Received a resize event: {Width:119 Height:28}
DEBU[0000] Started container 9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4
DEBU[0000] Enabling signal proxying
[root@9232654a751c /]#
[root@X]# cat /proc/11525/cgroup
11:hugetlb:/
10:cpuset:/
9:cpuacct,cpu:/machine.slice/libpod-conmon-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope
8:pids:/machine.slice/libpod-conmon-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope
7:blkio:/machine.slice/libpod-conmon-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope
6:devices:/machine.slice/libpod-conmon-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope
5:freezer:/
4:net_prio,net_cls:/
3:memory:/machine.slice/libpod-conmon-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope
2:perf_event:/
1:name=systemd:/machine.slice/libpod-conmon-9232654a751c1ca9a507144f64587b2fc76c7c1fee3519b3f73233023b24c7d4.scope

- Using cgroupfs cgroup driver - processes written to different cgroup:

DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf"
DEBU[0000] Merged system config "/usr/share/containers/libpod.conf": &{{false false false false false true} 0 {   [] [] []}  docker://  runc map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] systemd   /var/run/libpod -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman []   k8s.gcr.io/pause:3.1 /pause false false  2048 shm    false}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/fedora:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Using bridge netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 5 for container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] created container "99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901"
DEBU[0000] container "99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901" has work directory "/var/lib/containers/storage/overlay-containers/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/userdata"
DEBU[0000] container "99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901" has run directory "/var/run/containers/storage/overlay-containers/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/userdata"
DEBU[0000] New container created "99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901"
DEBU[0000] container "99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901" has CgroupParent "/libpod_parent/libpod-99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901"
DEBU[0000] Handling terminal attach
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/MCOZCTPKMUI46YIKGTTOIQ7O2Z,upperdir=/var/lib/containers/storage/overlay/3d51c591fc5369ebdcf9b33c43cf77a1ab158081e9458bbc42a9c28902f06273/diff,workdir=/var/lib/containers/storage/overlay/3d51c591fc5369ebdcf9b33c43cf77a1ab158081e9458bbc42a9c28902f06273/work
DEBU[0000] mounted container "99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901" at "/var/lib/containers/storage/overlay/3d51c591fc5369ebdcf9b33c43cf77a1ab158081e9458bbc42a9c28902f06273/merged"
DEBU[0000] Created root filesystem for container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 at /var/lib/containers/storage/overlay/3d51c591fc5369ebdcf9b33c43cf77a1ab158081e9458bbc42a9c28902f06273/merged
DEBU[0000] Made network namespace at /var/run/netns/cni-32ca7472-eb13-3a7a-e9d8-bf2debec5eac for container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901
INFO[0000] Got pod network &{Name:epic_payne Namespace:epic_payne ID:99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 NetNS:/var/run/netns/cni-32ca7472-eb13-3a7a-e9d8-bf2debec5eac Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
INFO[0000] About to add CNI network cni-loopback (type=loopback)
INFO[0000] Got pod network &{Name:epic_payne Namespace:epic_payne ID:99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 NetNS:/var/run/netns/cni-32ca7472-eb13-3a7a-e9d8-bf2debec5eac Networks:[] RuntimeConfig:map[podman:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
INFO[0000] About to add CNI network podman (type=bridge)
DEBU[0000] [0] CNI result: Interfaces:[{Name:cni-podman0 Mac:f6:3f:6d:ba:e4:84 Sandbox:} {Name:veth0c740f3c Mac:ca:d1:64:30:a5:e9 Sandbox:} {Name:eth0 Mac:76:cd:45:2f:ff:a2 Sandbox:/var/run/netns/cni-32ca7472-eb13-3a7a-e9d8-bf2debec5eac}], IP:[{Version:4 Interface:0xc0003aa568 Address:{IP:REDACTED Mask:ffff0000} Gateway:REDACTED}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]}
INFO[0000] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0000] IPv6 enabled; Adding default IPv6 external servers: [nameserver REDACTED nameserver REDACTED]
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0000] Setting CGroup path for container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 to /libpod_parent/libpod-99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 at /var/lib/containers/storage/overlay-containers/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 -u 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/userdata -p /var/run/containers/storage/overlay-containers/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -t --conmon-pidfile /var/run/containers/storage/overlay-containers/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901]"
DEBU[0000] Received: 13755
INFO[0000] Got Conmon PID as 13744
DEBU[0000] Created container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 in OCI runtime
DEBU[0000] Attaching to container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901
DEBU[0000] connecting to socket /var/run/libpod/socket/99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901/attach
DEBU[0000] Starting container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901 with command [bash]
DEBU[0000] Received a resize event: {Width:119 Height:28}
DEBU[0000] Started container 99a4b3e667665e31c89f9bf0ae21e87a30e2f943ed7aed290ad2f8042c53c901
DEBU[0000] Enabling signal proxying
[root@99a4b3e66766 /]#
[root@X]# cat /proc/13744/cgroup
11:hugetlb:/libpod_parent/conmon
10:cpuset:/libpod_parent/conmon
9:cpuacct,cpu:/libpod_parent/conmon
8:pids:/libpod_parent/conmon
7:blkio:/libpod_parent/conmon
6:devices:/libpod_parent/conmon
5:freezer:/libpod_parent/conmon
4:net_prio,net_cls:/libpod_parent/conmon
3:memory:/libpod_parent/conmon
2:perf_event:/libpod_parent/conmon
1:name=systemd:/libpod_parent/conmon

ROOTLESS podman run CONSISTENTLY FAILS ON HPE SERVER, BUT CONSISTENTLY SUCCEEDS IN LOCAL REPRO ATTEMPTS

The local repro environment is running the same OS and the same versions of all programs and packages listed in the HPE details section above. Two key differences: (1) it is not an HPE server, so no Serviceguard, and (2) no RT. When running rootless, podman uses cgroupfs despite libpod.conf specifying the systemd driver. This is what I expect: on a v1 hierarchy there is no unprivileged systemd delegation of cgroups.
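
For reference, the lingering suggestion repeated in the warnings below can be applied and verified with something like this (a sketch; on a pure cgroup v1 host it does not change the cgroupfs fallback, for the delegation reason above):

# loginctl enable-linger jr          # run as root for the rootless user
$ loginctl show-user jr -p Linger    # should now report Linger=yes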

Success in local repro attempt:

[jr@rhel7 ~]$ strace -o rootless-repro-strace.txt podman run --log-level=debug -it fedora bash
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1003` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/local/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-1003
DEBU[0000] Using static dir /home/jr.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-1003/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1003` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/local/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-1003
DEBU[0000] Using static dir /home/jr.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-1003/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/bin/fuse-overlayfs
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] Failed to add podman to systemd sandbox cgroup: exec: "dbus-launch": executable file not found in $PATH
INFO[0000] running as rootless
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1003` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/local/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-1003
DEBU[0000] Using static dir /home/jr.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-1003/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-1003:overlay.mount_program=/bin/fuse-overlayfs]docker.io/library/fedora:latest"
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-1003:overlay.mount_program=/bin/fuse-overlayfs]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Using slirp4netns netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 17 for container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-1003:overlay.mount_program=/bin/fuse-overlayfs]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] created container "c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e"
DEBU[0000] container "c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e" has work directory "/home/jr/.local/share/containers/storage/overlay-containers/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/userdata"
DEBU[0000] container "c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e" has run directory "/tmp/run-1003/overlay-containers/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/userdata"
DEBU[0000] New container created "c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e"
DEBU[0000] container "c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e" has CgroupParent "/libpod_parent/libpod-c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e"
DEBU[0000] Handling terminal attach
DEBU[0000] overlay: mount_data=lowerdir=/home/jr/.local/share/containers/storage/overlay/l/5HQK3GLTAG4Z4OEMJ2CWZJV3SX,upperdir=/home/jr/.local/share/containers/storage/overlay/b5f5feeba87a561df1132c0f9edd08edb4b0de800797a2cfb7bd8ea0b09fd8b3/diff,workdir=/home/jr/.local/share/containers/storage/overlay/b5f5feeba87a561df1132c0f9edd08edb4b0de800797a2cfb7bd8ea0b09fd8b3/work
DEBU[0000] Made network namespace at /tmp/run-1003/netns/cni-54e16dd1-72f0-8b4f-47db-313e6a703165 for container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e
DEBU[0000] slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /tmp/run-1003/netns/cni-54e16dd1-72f0-8b4f-47db-313e6a703165 tap0
DEBU[0000] mounted container "c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e" at "/home/jr/.local/share/containers/storage/overlay/b5f5feeba87a561df1132c0f9edd08edb4b0de800797a2cfb7bd8ea0b09fd8b3/merged"
DEBU[0000] Created root filesystem for container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e at /home/jr/.local/share/containers/storage/overlay/b5f5feeba87a561df1132c0f9edd08edb4b0de800797a2cfb7bd8ea0b09fd8b3/merged
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0000] Created OCI spec for container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e at /home/jr/.local/share/containers/storage/overlay-containers/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/userdata/config.json
DEBU[0000] /usr/local/libexec/podman/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/local/libexec/podman/conmon  args="[--api-version 1 -c c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e -u c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e -r /usr/bin/runc -b /home/jr/.local/share/containers/storage/overlay-containers/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/userdata -p /tmp/run-1003/overlay-containers/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/userdata/pidfile -l k8s-file:/home/jr/.local/share/containers/storage/overlay-containers/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/userdata/ctr.log --exit-dir /tmp/run-1003/libpod/tmp/exits --socket-dir-path /tmp/run-1003/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /tmp/run-1003/overlay-containers/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jr/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-1003 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-1003/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for pids: mkdir /sys/fs/cgroup/pids/libpod_parent: permission denied
DEBU[0000] Received: 3455
INFO[0000] Got Conmon PID as 3444
DEBU[0000] Created container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e in OCI runtime
DEBU[0000] Attaching to container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e
DEBU[0000] connecting to socket /tmp/run-1003/libpod/tmp/socket/c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e/attach
DEBU[0000] Starting container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e with command [bash]
DEBU[0000] Received a resize event: {Width:139 Height:33}
DEBU[0000] Started container c3be953da46e015e93a9de37c547a7587a32d609fad3ca721135937833d94a1e
DEBU[0000] Enabling signal proxying
[root@c3be953da46e /]#

The conmon process (/usr/local/libexec/podman/conmon --api-version [...]) runs as the unprivileged user jr and has PID 3444.

[root@rhel7 ~]# cat /proc/3444/cgroup
11:pids:/user.slice
10:hugetlb:/
9:perf_event:/
8:devices:/user.slice
7:cpuset:/
6:memory:/user.slice
5:freezer:/
4:blkio:/user.slice
3:cpuacct,cpu:/user.slice
2:net_prio,net_cls:/
1:name=systemd:/user.slice/user-1000.slice/session-158.scope

A question about this: I ran this command as user jr with uid/gid 1003; uid 1000 is vagrant. Why does the systemd hierarchy account for my process under a different, totally unrelated, user?
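
For reference, a quick way to check which user actually owns that scope is a diagnostic sketch like the following (the session number 158 is taken from the name=systemd path above):

$ loginctl list-sessions       # lists every active session with its UID and user
$ loginctl session-status 158  # details for session-158.scope, including the owning user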

Failure on HP box:

[jr@X ~]$ strace -o rootless-hp-strace.txt podman run --log-level=debug -it fedora bash
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 2063` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-2063/containers
DEBU[0000] Using static dir /home/jr/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-2063/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 2063` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-2063/containers
DEBU[0000] Using static dir /home/jr.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-2063/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/bin/fuse-overlayfs
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] Failed to add podman to systemd sandbox cgroup: dial unix /run/user/0/bus: connect: permission denied
INFO[0000] running as rootless
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 2063` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-2063/containers
DEBU[0000] Using static dir /home/jr/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-2063/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-2063/containers:overlay.mount_program=/bin/fuse-overlayfs]docker.io/library/fedora:latest"
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-2063/containers:overlay.mount_program=/bin/fuse-overlayfs]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Using slirp4netns netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 20 for container bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-2063/containers:overlay.mount_program=/bin/fuse-overlayfs]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] created container "bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980"
DEBU[0000] container "bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980" has work directory "/home/jr/.local/share/containers/storage/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata"
DEBU[0000] container "bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980" has run directory "/tmp/run-2063/containers/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata"
DEBU[0000] New container created "bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980"
DEBU[0000] container "bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980" has CgroupParent "/libpod_parent/libpod-bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980"
DEBU[0000] Handling terminal attach
DEBU[0000] overlay: mount_data=lowerdir=/home/jr/.local/share/containers/storage/overlay/l/PQAXVJJEQRYHUB42ILADK5RH23,upperdir=/home/jr/.local/share/containers/storage/overlay/ccd4afb295c2675856f5dc0527b61d1c02dfa5b4ddeb837c34095ef8ae204058/diff,workdir=/home/jr/.local/share/containers/storage/overlay/ccd4afb295c2675856f5dc0527b61d1c02dfa5b4ddeb837c34095ef8ae204058/work
DEBU[0000] Made network namespace at /tmp/run-2063/netns/cni-d1cfb87d-64ce-6dfb-7206-8a4a621693f4 for container bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980
DEBU[0000] mounted container "bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980" at "/home/jr/.local/share/containers/storage/overlay/ccd4afb295c2675856f5dc0527b61d1c02dfa5b4ddeb837c34095ef8ae204058/merged"
DEBU[0000] slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /tmp/run-2063/netns/cni-d1cfb87d-64ce-6dfb-7206-8a4a621693f4 tap0
DEBU[0000] Created root filesystem for container bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980 at /home/jr/.local/share/containers/storage/overlay/ccd4afb295c2675856f5dc0527b61d1c02dfa5b4ddeb837c34095ef8ae204058/merged
INFO[0000] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0000] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0000] Created OCI spec for container bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980 at /home/jr/.local/share/containers/storage/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980 -u bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980 -r /usr/bin/runc -b /home/jr/.local/share/containers/storage/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata -p /tmp/run-2063/containers/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata/pidfile -l k8s-file:/home/jr/.local/share/containers/storage/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata/ctr.log --exit-dir /tmp/run-2063/libpod/tmp/exits --socket-dir-path /tmp/run-2063/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /tmp/run-2063/containers/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jr/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-2063/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-2063/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: write /sys/fs/cgroup/memory/libpod_parent/conmon/tasks: open /sys/fs/cgroup/memory/libpod_parent/conmon/tasks: permission denied
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980
DEBU[0000] Tearing down network namespace at /tmp/run-2063/netns/cni-d1cfb87d-64ce-6dfb-7206-8a4a621693f4 for container bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980
DEBU[0000] Error unmounting /home/jr/.local/share/containers/storage/overlay/ccd4afb295c2675856f5dc0527b61d1c02dfa5b4ddeb837c34095ef8ae204058/merged with fusermount3 - exec: "fusermount3": executable file not found in $PATH
DEBU[0000] unmounted container "bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980"
DEBU[0000] ExitCode msg: "time=\"2020-07-24t11:17:19-05:00\" level=warning msg=\"signal: killed\"\ntime=\"2020-07-24t11:17:19-05:00\" level=error msg=\"container_linux.go:349: starting container process caused \\\"process_linux.go:297: applying cgroup configuration for process caused \\\\\\\"mountpoint for devices not found\\\\\\\"\\\"\"\ncontainer_linux.go:349: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mountpoint for devices not found\\\"\": oci runtime error"
ERRO[0000] time="2020-07-24T11:17:19-05:00" level=warning msg="signal: killed"
time="2020-07-24T11:17:19-05:00" level=error msg="container_linux.go:349: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mountpoint for devices not found\\\"\""
container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": OCI runtime error
[jr@X ~]$ podman ps -a
CONTAINER ID  IMAGE                             COMMAND    CREATED         STATUS   PORTS  NAMES
bb2e2be096e0  docker.io/library/fedora:latest   bash       47 minutes ago  Created         agitated_galois
[root@X]# cat /var/log/messages
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.FZGGO0}
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: attach sock path: /tmp/run-2063/libpod/tmp/socket/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/attach
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/run-2063/libpod/tmp/socket/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/attach}
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: terminal_ctrl_fd: 14
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: winsz read side: 16, winsz write side: 16
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <nwarn>: Failed to chown stdin
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <nwarn>: Failed to chown stdout
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: about to accept from console_socket_fd: 10
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: about to recvfd from connfd: 12
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <ninfo>: console = {.name = '(null)'; .fd = 0}
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <nwarn>: Failed to get console terminal settings
Jul 24 11:17:19 X conmon: conmon bb2e2be096e01b9bd681 <error>: Failed to create container: exit status 1

Strace output for rootless HPE run and rootless local repro run attached:
rootless-repro-strace.txt
rootless-hp-strace.txt

The container log file is empty, which we expect since the container never started. Config.json is attached:
config-json.txt

[jr@X ~]$ cd /home/jr/.local/share/containers/storage/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata
[jr@X userdata]$ ls -la
total 32
drwx------ 4 jr jr     126 Jul 24 12:12 .
drwx------ 3 jr jr     22 Jul 24 11:17 ..
drwxr-xr-x 2 jr jr     6 Jul 24 11:17 artifacts
srwx------ 1 jr jr     0 Jul 24 12:12 attach
-rw-r--r-- 1 jr jr     20965 Jul 24 11:17 config.json
prw-r--r-- 1 jr jr     0 Jul 24 11:17 ctl
-rw------- 1 jr jr     0 Jul 24 11:17 ctr.log
drwx------ 2 jr jr     6 Jul 24 11:17 shm
prw-r--r-- 1 jr jr     0 Jul 24 11:17 winsz

DEBUGGING

This is all still on the HPE server, carrying on from the previous section. If I attempt to rerun the conmon process manually, i.e.:

  1. buildah unshare
  2. buildah mount bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980
  3. Remove the ctl and winsz files and reexecute
    This exits with 0 and /var/log/messages is again unhelpful:
[jr@X userdata]$ /usr/bin/conmon --api-version 1 -c bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980 -u bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980 -r /usr/bin/runc -b /home/jr/.local/share/containers/storage/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata -p /tmp/run-2063/containers/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata/pidfile -l k8s-file:/home/jr/.local/share/containers/storage/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata/ctr.log --exit-dir /tmp/run-2063/libpod/tmp/exits --socket-dir-path /tmp/run-2063/libpod/tmp/socket --syslog -t --conmon-pidfile /tmp/run-2063/containers/overlay-containers/bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jr/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-2063/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-2063/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg bb2e2be096e01b9bd6811ea4efc8c6b4744074d0c8dd4ab5dd183da061178980

Jul 24 12:19:35 X conmon: conmon bb2e2be096e01b9bd681 <nwarn>: Failed to chown stdin
Jul 24 12:19:35 X conmon: conmon bb2e2be096e01b9bd681 <nwarn>: Failed to chown stdout
Jul 24 12:19:35 X conmon: conmon bb2e2be096e01b9bd681 <error>: Failed to create container: exit status 1

The initial error message made me think that there was an issue with using runc rootless on this HPE node. However, creating and starting a container as an unprivileged user with runc alone (following the steps in https://github.com/opencontainers/runc), outside the context of podman/crio, works just fine:

[jr@X test-runc-container]$ runc --debug --root /tmp/runc run testrootlessrunccontainer
DEBU[0000] nsexec:601 nsexec started
DEBU[0000] child process in init()
DEBU[0000] logging has already been configured
DEBU[0000] log pipe has been closed: EOF
/ #
[root@X]# cat /proc/29407/cgroup
11:hugetlb:/
10:cpuset:/
9:cpuacct,cpu:/user.slice
8:pids:/user.slice
7:blkio:/user.slice
6:devices:/user.slice
5:freezer:/
4:net_prio,net_cls:/
3:memory:/user.slice
2:perf_event:/
1:name=systemd:/user.slice/user-2050.slice/session-396960.scope
(Again the question of why the wrong user slice is used...)
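
For anyone wanting to repeat that standalone check, here is a minimal sketch of the rootless runc flow referenced above (the bundle directory, container name, and rootfs source are illustrative, not the exact commands used on the HPE box):

mkdir -p test-runc-container/rootfs && cd test-runc-container
# populate rootfs/ with any minimal root filesystem, e.g. an exported busybox or alpine image
runc spec --rootless                              # writes a config.json adjusted for an unprivileged user
runc --debug --root /tmp/runc run testrootlessrunccontainer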

So the conmon syslog output is telling us that the instance of conmon that podman created to monitor this new container isn't able to act on the request to start that container with runc.

# Failing HPE podman debug output

DEBU[0000] running conmon: /usr/bin/conmon               args=“[…]”
[…]
DEBU[0000] Received: -1


# Succeeding repro env podman debug output

DEBU[0000] running conmon: /usr/local/libexec/podman/conmon  args=“[…]”
[…]
DEBU[0000] Received: 3455				# This is the pid of the container bash process
INFO[0000] Got Conmon PID as 3444		# conmon’s parent is pid1

We know conmon double-forks to detach from init and then launches the OCI runtime as a new child. In this case the podman run process forked off a child with pid 17006 - I assume this is conmon. Conmon then tried to launch the container process, its child, as user 2063, but that failed with exit status 127. That exit code usually means "path isn't there", which certainly matches the debug complaint about not finding the devices mount point - but all cgroups were and are healthily mounted, and plain old runc can get to them just fine.

open("/proc/16991/cmdline", O_RDONLY)   = 7\
read(7, "podman\\0run\\0--log-level=debug\\0-it"..., 512) = 45\
read(7, "", 467)                        = 0\
close(7)                                = 0\
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f4d99453a10) = 17006\
wait4(17006, [\{WIFEXITED(s) && WEXITSTATUS(s) == 127\}], 0, NULL) = 17006\
--- SIGCHLD \{si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=17006, si_uid=2063, si_status=127, si_utime=5, si_stime=4\} ---\
rt_sigreturn(\{mask=[]\})                 = 17006\
futex(0x5611a9064350, FUTEX_WAKE_PRIVATE, 1) = 1\
futex(0x5611a9064250, FUTEX_WAKE_PRIVATE, 1) = 0\
close(6)                                = 0\
close(5)                                = 0\
exit_group(127)                         = ?\
+++ exited with 127 +++\
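
(A quick, hedged way to double-check the claim that the controllers are mounted on the failing host - the devices controller should appear exactly once, under /sys/fs/cgroup:)

grep -w devices /proc/self/mountinfo   # shows the cgroup v1 devices controller mount(s)
cat /proc/cgroups                       # lists the controllers the kernel knows about and whether they are enabled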

Thoughts or direction appreciated!

@giuseppe
Member

the systemd driver is not supported for rootless on cgroup v1. You should use cgroupfs.

Have you logged in using the vagrant user?

Please show me the output of these commands:

$ podman unshare env
$ podman unshare cat /proc/self/mountinfo
$ podman unshare strace -v -s 4096 -f -o rootless-repro-strace.txt podman run --log-level=debug -it fedora bash  (Please note the -f; otherwise we don't trace the runc process)

Does it work if you force cgroupfs (podman --cgroup-manager cgroupfs run -it fedora bash)?
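
(As an aside, the cgroup manager can also be pinned in configuration rather than on every invocation - a rough sketch, assuming the podman 1.x libpod.conf layout shipped with RHEL 7; newer releases read the same key from containers.conf under [engine]:)

# ~/.config/containers/libpod.conf (per-user override; path and format are an assumption for podman 1.x)
cgroup_manager = "cgroupfs"
events_logger = "file"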

@a-trout-in-the-milk

a-trout-in-the-milk commented Jul 25, 2020

@giuseppe -

Yes, cgroupfs is being used in the rootless scenario, as it should be. Debug log:
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
Also, I explicitly verified yesterday (though I see I did not post it above - apologies) that the command below fails with the same mountpoint error when run as non-root:
[jr@X ~]$ podman --cgroup-manager=cgroupfs --log-level debug run -it fedora bash

The failing HPE environment is RHEL on a physical server - no virtualization. My local repro environment, where rootless podman run succeeds, is a vagrant VM on VirtualBox, but I am not logged in as vagrant. This is the login sequence I take on my local VM (edited to fix sequence):

  1. Login as vagrant with password
  2. sudo su to root
  3. su to jr user with password. All rootless podman testing done as jr user, which does not have sudo.

I honestly don't know if the question of why systemd is tracking the process in the wrong user slice is relevant to this mountpoint-not-found error. Given that this named (name=systemd) cgroup has no actual controllers attached, it may be a red herring.

On the HPE box where rootless podman run fails:

  1. podman unshare env
[jr@X ~]$ podman unshare env
XDG_SESSION_ID=471139
HOSTNAME=X
SHELL=/bin/bash
TERM=xterm-256color
HISTSIZE=1000
QTDIR=/usr/lib64/qt-3.3
QT_GRAPHICSSYSTEM_CHECKED=1
USER=jr
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
SUDO_USER=bluetechlab
SUDO_UID=2050
USERNAME=root
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/cmcluster/bin
MAIL=/var/spool/mail/bluetechlab
PWD=/home/jr
LANG=en_US.UTF-8
MODULEPATH=/usr/share/Modules/modulefiles:/etc/modulefiles
LOADEDMODULES=
SHLVL=1
SUDO_COMMAND=/bin/su jr
HOME=/home/jr
LOGNAME=jr
XDG_DATA_DIRS=/home/jr/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
MODULESHOME=/usr/share/Modules
LESSOPEN=||/usr/bin/lesspipe.sh %s
SUDO_GID=2050
BASH_FUNC_module()=() {  eval `/usr/bin/modulecmd bash $*`
}
_=/bin/podman
OLDPWD=/home/bluetechlab
TMPDIR=/var/tmp
XDG_RUNTIME_DIR=/tmp/run-2063
XDG_CONFIG_HOME=/home/jr/.config
_CONTAINERS_USERNS_CONFIGURED=done
_CONTAINERS_ROOTLESS_UID=2063
_CONTAINERS_ROOTLESS_GID=2063
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
CONTAINERS_GRAPHROOT=/home/jr/.local/share/containers/storage
CONTAINERS_RUNROOT=/tmp/run-2063/containers
  2. podman unshare cat /proc/self/mountinfo
[jr@X ~]$ podman unshare cat /proc/self/mountinfo
228 226 8:4 / / rw,relatime master:1 - xfs /dev/sda4 rw,attr2,inode64,sunit=512,swidth=512,noquota
229 228 0:5 / /dev rw,nosuid master:2 - devtmpfs devtmpfs rw,size=98768300k,nr_inodes=24692075,mode=755
230 229 0:19 / /dev/shm rw,nosuid,nodev master:3 - tmpfs tmpfs rw
231 229 0:12 / /dev/pts rw,nosuid,noexec,relatime master:4 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
232 229 0:37 / /dev/hugepages rw,relatime master:26 - hugetlbfs hugetlbfs rw
233 229 0:15 / /dev/mqueue rw,relatime master:27 - mqueue mqueue rw
275 229 0:33 / /dev/cpuset rw,relatime master:28 - cgroup cgroup rw,cpuset
285 228 0:3 / /proc rw,nosuid,nodev,noexec,relatime master:5 - proc proc rw
370 285 0:16 / /proc/sys/fs/binfmt_misc rw,relatime master:24 - autofs systemd-1 rw,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=26722
371 370 0:41 / /proc/sys/fs/binfmt_misc rw,relatime master:176 - binfmt_misc binfmt_misc rw
372 228 0:18 / /sys rw,nosuid,nodev,noexec,relatime master:6 - sysfs sysfs rw
373 372 0:17 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime master:7 - securityfs securityfs rw
374 372 0:21 / /sys/fs/cgroup ro,nosuid,nodev,noexec master:8 - tmpfs tmpfs ro,mode=755
375 374 0:22 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime master:9 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
376 374 0:25 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime master:10 - cgroup cgroup rw,perf_event
377 374 0:26 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,memory
378 374 0:27 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime master:12 - cgroup cgroup rw,net_prio,net_cls
379 374 0:28 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime master:13 - cgroup cgroup rw,freezer
380 374 0:29 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime master:14 - cgroup cgroup rw,devices
381 374 0:30 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,blkio
382 374 0:31 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,pids
383 374 0:32 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,cpuacct,cpu
384 374 0:33 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,cpuset
385 374 0:34 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,hugetlb
386 372 0:23 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime master:20 - pstore pstore rw
387 372 0:24 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime master:21 - efivarfs efivarfs rw
388 372 0:35 / /sys/kernel/config rw,relatime master:22 - configfs configfs rw
389 372 0:6 / /sys/kernel/debug rw,relatime master:25 - debugfs debugfs rw
390 228 0:20 / /run rw,nosuid,nodev master:23 - tmpfs tmpfs rw,mode=755
391 390 0:40 / /run/user/2050 rw,nosuid,nodev,relatime master:124 - tmpfs tmpfs rw,size=19756892k,mode=700,uid=2050,gid=2050
392 390 0:42 / /run/user/0 rw,nosuid,nodev,relatime master:251 - tmpfs tmpfs rw,size=19756892k,mode=700
393 392 0:43 / /run/user/0/gvfs rw,nosuid,nodev,relatime master:258 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=0,group_id=0
394 390 0:20 /netns /run/netns rw,nosuid,nodev master:23 - tmpfs tmpfs rw,mode=755
395 390 0:44 / /run/user/2062 rw,nosuid,nodev,relatime master:184 - tmpfs tmpfs rw,size=19756892k,mode=700,uid=2062,gid=2062
396 228 8:3 / /home rw,relatime master:29 - xfs /dev/sda3 rw,attr2,inode64,sunit=512,swidth=512,noquota
397 228 8:2 / /boot rw,relatime master:30 - xfs /dev/sda2 rw,attr2,inode64,sunit=512,swidth=512,noquota
398 397 8:1 / /boot/efi rw,relatime master:34 - vfat /dev/sda1 rw,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro
399 228 8:17 / /data rw,relatime master:31 - xfs /dev/sdb1 rw,attr2,inode64,sunit=512,swidth=512,noquota
401 228 8:65 / /SBC2 rw,relatime master:32 - xfs /dev/sde1 rw,attr2,inode64,noquota
496 228 8:49 / /SBC1 rw,relatime master:33 - xfs /dev/sdd1 rw,attr2,inode64,noquota
497 228 0:38 / /var/lib/nfs/rpc_pipefs rw,relatime master:35 - rpc_pipefs sunrpc rw
519 228 8:4 /tmp/run-2063/netns /tmp/run-2063/netns rw,relatime shared:168 master:1 - xfs /dev/sda4 rw,attr2,inode64,sunit=512,swidth=512,noquota
564 396 8:3 /jr/.local/share/containers/storage/overlay /home/jr/.local/share/containers/storage/overlay rw,relatime - xfs /dev/sda3 rw,attr2,inode64,sunit=512,swidth=512,noquota
  3. Both produce the same output:
    podman unshare strace -v -s 4096 -f -o rootless-repro-strace.txt podman run --cgroup-manager=cgroupfs --log-level=debug -it fedora bash
    podman unshare strace -v -s 4096 -f -o rootless-repro-strace.txt podman run --log-level=debug -it fedora bash
[jr@X ~]$ podman unshare strace -v -s 4096 -f -o rootless-repro-strace.txt podman run --log-level=debug -it fedora bash
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 2063` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-2063/containers
DEBU[0000] Using static dir /home/jr/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-2063/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/bin/fuse-overlayfs
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] Failed to add podman to systemd sandbox cgroup: dial unix /run/user/0/bus: connect: permission denied
INFO[0000] running as rootless
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 2063` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jr/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-2063/containers
DEBU[0000] Using static dir /home/jr/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-2063/libpod/tmp
DEBU[0000] Using volume path /home/jr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-2063/containers:overlay.mount_program=/bin/fuse-overlayfs]docker.io/library/fedora:latest"
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-2063/containers:overlay.mount_program=/bin/fuse-overlayfs]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Using slirp4netns netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 21 for container 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4
DEBU[0000] parsed reference into "[overlay@/home/jr/.local/share/containers/storage+/tmp/run-2063/containers:overlay.mount_program=/bin/fuse-overlayfs]@a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] exporting opaque data as blob "sha256:a368cbcfa6789bc347345f6d19132afe138b62ff5373d2aa5f37120277c90b54"
DEBU[0000] created container "9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4"
DEBU[0000] container "9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4" has work directory "/home/jr/.local/share/containers/storage/overlay-containers/9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4/userdata"
DEBU[0000] container "9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4" has run directory "/tmp/run-2063/containers/overlay-containers/9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4/userdata"
DEBU[0000] New container created "9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4"
DEBU[0000] container "9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4" has CgroupParent "/libpod_parent/libpod-9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4"
DEBU[0000] Handling terminal attach
DEBU[0000] Made network namespace at /tmp/run-2063/netns/cni-529ec1a3-358e-0175-9001-2724093453a7 for container 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4
DEBU[0000] overlay: mount_data=lowerdir=/home/jr/.local/share/containers/storage/overlay/l/PQAXVJJEQRYHUB42ILADK5RH23,upperdir=/home/jr/.local/share/containers/storage/overlay/19cd1a39ba17b433f7b6837d019e565f177503199aa7cb435d1585161be97c83/diff,workdir=/home/jr/.local/share/containers/storage/overlay/19cd1a39ba17b433f7b6837d019e565f177503199aa7cb435d1585161be97c83/work
DEBU[0000] slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /tmp/run-2063/netns/cni-529ec1a3-358e-0175-9001-2724093453a7 tap0
DEBU[0000] mounted container "9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4" at "/home/jr/.local/share/containers/storage/overlay/19cd1a39ba17b433f7b6837d019e565f177503199aa7cb435d1585161be97c83/merged"
DEBU[0000] Created root filesystem for container 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4 at /home/jr/.local/share/containers/storage/overlay/19cd1a39ba17b433f7b6837d019e565f177503199aa7cb435d1585161be97c83/merged
INFO[0000] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0000] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0000] Created OCI spec for container 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4 at /home/jr/.local/share/containers/storage/overlay-containers/9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4 -u 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4 -r /usr/bin/runc -b /home/jr/.local/share/containers/storage/overlay-containers/9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4/userdata -p /tmp/run-2063/containers/overlay-containers/9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4/userdata/pidfile -l k8s-file:/home/jr/.local/share/containers/storage/overlay-containers/9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4/userdata/ctr.log --exit-dir /tmp/run-2063/libpod/tmp/exits --socket-dir-path /tmp/run-2063/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /tmp/run-2063/containers/overlay-containers/9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/jr/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-2063/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-2063/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: write /sys/fs/cgroup/cpu/libpod_parent/conmon/tasks: open /sys/fs/cgroup/cpu/libpod_parent/conmon/tasks: permission denied
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4
DEBU[0000] Tearing down network namespace at /tmp/run-2063/netns/cni-529ec1a3-358e-0175-9001-2724093453a7 for container 9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4
DEBU[0000] Error unmounting /home/jr/.local/share/containers/storage/overlay/19cd1a39ba17b433f7b6837d019e565f177503199aa7cb435d1585161be97c83/merged with fusermount3 - exec: "fusermount3": executable file not found in $PATH
DEBU[0000] unmounted container "9558018efae997e0a874abc5623fcd996956812ff555b232faec5b985af93bc4"
DEBU[0000] ExitCode msg: "time=\"2020-07-25t17:06:51-05:00\" level=warning msg=\"signal: killed\"\ntime=\"2020-07-25t17:06:51-05:00\" level=error msg=\"container_linux.go:349: starting container process caused \\\"process_linux.go:297: applying cgroup configuration for process caused \\\\\\\"mountpoint for devices not found\\\\\\\"\\\"\"\ncontainer_linux.go:349: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mountpoint for devices not found\\\"\": oci runtime error"
ERRO[0000] time="2020-07-25T17:06:51-05:00" level=warning msg="signal: killed"
time="2020-07-25T17:06:51-05:00" level=error msg="container_linux.go:349: starting container process caused \"process_linux.go:297: applying cgroup configuration for process caused \\\"mountpoint for devices not found\\\"\""
container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": OCI runtime error
Error: exit status 127

Strace attached:
rootless-repro-strace.txt

@giuseppe
Member

I think runc might get confused by cpuset being mounted twice:

384 374 0:33 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,cpuset
275 229 0:33 / /dev/cpuset rw,relatime master:28 - cgroup cgroup rw,cpuset

Do you know where the /dev/cpuset mount is coming from? That doesn't seem correct
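
(One hedged way to spot such duplicates is to list every cgroup v1 mount point together with its controllers:)

# print mountpoint and controller options for each cgroup v1 mount; duplicated controllers stand out
grep ' - cgroup ' /proc/self/mountinfo | awk '{print $5, $NF}'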

@a-trout-in-the-milk

Hmm. I do not.
Let me do some digging...

@a-trout-in-the-milk

a-trout-in-the-milk commented Jul 28, 2020

@giuseppe Thanks for this pointer. Red Hat has verified that having this errant cgroup mounted does in fact cause this error when running rootless.

  1. Mount errant cgroup (any cgroup outside the default /sys/fs/... hierarchy)
  2. Then install podman
  3. Rootless podman run attempts fail consistently:
[jr@rhel7 ~]$ podman run --rm -i -t docker.io/busybox echo hello
Error: container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": OCI runtime error

On this now-failing repro env, unmounting errant cgroup(s) and running a podman system migrate resolves the failure.
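
For anyone trying to reproduce or undo this, a minimal sketch (run the mount commands as root; the /dev/cpuset location is only an example of an out-of-tree controller mount):

# reproduce: mount a v1 controller outside the default /sys/fs/cgroup hierarchy
mkdir -p /dev/cpuset
mount -t cgroup -o cpuset cpuset /dev/cpuset

# clean up: remove the errant mount, then reset rootless podman state as the unprivileged user
umount /dev/cpuset
podman system migrate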

Not sure if this has, or will, ever come up outside edge cases like ours. I don't think there's any "fix" needed here, as I believe it is reasonable to expect the systemd init system to be the only author of a single cgroup v1 hierarchy - edited for clarity: deviations from the default controller locations should be considered violations even when not managed by systemd.

I've got an internal thread to find out what happens on an in-place upgrade of supported-path RHEL 6 (pre-systemd; libcgroup mounted cgroups at /cgroup) to RHEL 7; I'll leave this issue open for a couple of days until I've got an answer to that regarding potential scope. Regardless, the solution is removing the errant cgroup mounts and running podman system migrate.

@giuseppe
Member

@a-trout-in-the-milk thanks for confirming it.

I don't think such edge cases are going to be addressed in RHEL 7 anyway.

Feel free to include me in any Red Hat discussion on the problem you are having, but let's close the issue here, as it cannot be addressed in future versions of Podman anyway. For cgroup v2, we already assume throughout the stack that cgroups are mounted at /sys/fs/cgroup.
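
(For what it's worth, a quick hedged check of which cgroup version a host is running:)

stat -fc %T /sys/fs/cgroup/   # prints cgroup2fs on a unified (v2) host, tmpfs on a cgroup v1 host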

@jeffcbecker

I think runc might get confused by cpuset being mounted twice:

384 374 0:33 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,cpuset
275 229 0:33 / /dev/cpuset rw,relatime master:28 - cgroup cgroup rw,cpuset

Do you know where the /dev/cpuset mount is coming from? That doesn't seem correct

@giuseppe do you know if this is still an issue with RHEL8, i.e., if rootless podman has problems with the two mounted cpusets?

@giuseppe
Member

giuseppe commented Oct 8, 2021

Could you try whether it works for you with crun instead of runc? crun is available in RHEL 8.

@jeffcbecker

Unfortunately, we won't be moving to RHEL 8 for a few months, so I am unable to try your suggestion. However, I found a workaround: if I unmount /dev/cpuset and then remount it, rootless Podman works. It appears the relative order of /sys/fs/cgroup/cpuset and /dev/cpuset in /proc/self/mountinfo matters.
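
A rough sketch of that workaround, assuming /dev/cpuset is a plain cgroup v1 cpuset mount (run as root):

umount /dev/cpuset
mount -t cgroup -o cpuset cpuset /dev/cpuset   # remounting after /sys/fs/cgroup/cpuset changes its relative order in mountinfo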

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023