Podman hangs at run #4079

Closed
jess-sch opened this issue Sep 22, 2019 · 16 comments
Labels: kind/bug, locked - please file new issue/PR

jess-sch commented Sep 22, 2019

Steps to reproduce the issue:

  1. Set up Arch, install Podman with pacman -S podman, and set up /etc/subuid and /etc/subgid as described in the documentation

  2. Write runtime = "runc" to /etc/containers/libpod.conf, because Podman defaults to crun (which doesn't exist on Arch); see the sketch below

  3. podman run --rm -it alpine
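
A minimal sketch of those three steps as shell commands, assuming the subuid/subgid range quoted later in this report and assuming that appending the runtime line is enough (the stock libpod.conf may already contain a runtime key):

  sudo pacman -S podman                                             # step 1: install Podman
  echo "lumi:10000:55537" | sudo tee -a /etc/subuid                 # rootless UID range from this report
  echo "lumi:10000:55537" | sudo tee -a /etc/subgid                 # rootless GID range from this report
  echo 'runtime = "runc"' | sudo tee -a /etc/containers/libpod.conf # step 2: use runc instead of the missing crun
  podman run --rm -it alpine                                        # step 3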

Describe the results you received:

Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob 9d48c3bd43c5 done
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures
Error: container creation timeout: internal libpod error

Describe the results you expected:
I get dropped into an ash session in an Alpine container

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.5.1

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.8
  podman version: 1.5.1
host:
  BuildahVersion: 1.10.1
  Conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.0.0, commit: e217fdff82e0b1a6184a28c43043a4065083407f'
  Distribution:
    distribution: arch
    version: unknown
  MemFree: 6101864448
  MemTotal: 8159133696
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8
      commit: 425e105d5a03fabd737a126ad93d62a9eeede87f
      spec: 1.0.1-dev
  SwapFree: 12884897792
  SwapTotal: 12884897792
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: Lumi-ThinkPad
  kernel: 5.3.0-arch1-1-ARCH
  os: linux
  rootless: true
  uptime: 13m 23.94s
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/lumi/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/lumi/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 1
  RunRoot: /run/user/1000
  VolumePath: /home/lumi/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

  • Running on a ThinkPad X230, on a fresh install of Arch Linux.
  • /etc/subuid: lumi:10000:55537
  • /etc/subgid: lumi:10000:55537
  • Possibly relevant: the debug output says GraphDriverName: vfs, but /etc/containers/storage.conf configures the storage driver as overlayfs (see the quick check below).
  • It does work as root.
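
A quick way to confirm which config file and graph driver rootless Podman is actually using (these exact grep patterns are illustrative, not taken from the thread):

  podman info | grep -iE 'configfile|graphdrivername'   # rootless config lives under ~/.config/containers/, per the output above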
mheon commented Sep 22, 2019 via email

jess-sch commented Sep 22, 2019

> Can you add --log-level=debug to your Podman run command and paste the output?

INFO[0000] running as rootless                          
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/lumi/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver vfs                       
DEBU[0000] Using graph root /home/lumi/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /home/lumi/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/lumi/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "vfs"   
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]docker.io/library/alpine:latest" 
DEBU[0000] reference "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]docker.io/library/alpine:latest" does not resolve to an image ID 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]localhost/alpine:latest" 
DEBU[0000] reference "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]localhost/alpine:latest" does not resolve to an image ID 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]docker.io/library/alpine:latest" 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]registry.fedoraproject.org/alpine:latest" 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]quay.io/alpine:latest" 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]registry.access.redhat.com/alpine:latest" 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]registry.centos.org/alpine:latest" 
Trying to pull docker.io/library/alpine...
DEBU[0000] reference rewritten from 'docker.io/library/alpine:latest' to 'docker.io/library/alpine:latest' 
DEBU[0000] Trying to pull "docker.io/library/alpine:latest" 
DEBU[0000] Credentials not found                        
DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration 
DEBU[0000]  Using "default-docker" configuration        
DEBU[0000]  No signature storage configuration found for docker.io/library/alpine:latest 
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io 
DEBU[0000] GET https://registry-1.docker.io/v2/         
DEBU[0000] Ping https://registry-1.docker.io/v2/ status 401 
DEBU[0000] GET https://auth.docker.io/token?scope=repository%3Alibrary%2Falpine%3Apull&service=registry.docker.io 
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/manifests/latest 
DEBU[0001] Using blob info cache at /home/lumi/.local/share/containers/cache/blob-info-cache-v1.boltdb 
DEBU[0001] Source is a manifest list; copying (only) instance sha256:acd3ca9941a85e8ed16515bfc5328e4e2f8c128caa72959a58a127b7801ee01f 
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/manifests/sha256:acd3ca9941a85e8ed16515bfc5328e4e2f8c128caa72959a58a127b7801ee01f 
DEBU[0002] IsRunningImageAllowed for image docker:docker.io/library/alpine:latest 
DEBU[0002]  Using default policy section                
DEBU[0002]  Requirement 0: allowed                      
DEBU[0002] Overall: allowed                             
DEBU[0002] Downloading /v2/library/alpine/blobs/sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4 
DEBU[0002] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4 
Getting image source signatures
DEBU[0003] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json] 
DEBU[0003] ... will first try using the original manifest unmodified 
DEBU[0003] Downloading /v2/library/alpine/blobs/sha256:9d48c3bd43c520dc2784e868a780e976b207cbf493eaff8c6596eb871cbd9609 
DEBU[0003] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:9d48c3bd43c520dc2784e868a780e976b207cbf493eaff8c6596eb871cbd9609 
DEBU[0003] Detected compression format gzip             
DEBU[0003] Using original blob without modification     
Copying blob 9d48c3bd43c5 done
DEBU[0006] No compression detected                      
DEBU[0006] Using original blob without modification     
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures
DEBU[0006] Start untar layer                            
DEBU[0006] Untar time: 0.20446282s                      
DEBU[0006] setting image creation date to 2019-08-20 20:19:55.211423266 +0000 UTC 
DEBU[0006] created new image ID "961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] set names of image "961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" to [docker.io/library/alpine:latest] 
DEBU[0006] saved image metadata "{}"                    
DEBU[0006] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]docker.io/library/alpine:latest" 
DEBU[0006] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] Got mounts: []                               
DEBU[0006] Got volumes: []                              
DEBU[0006] No hostname set; container's hostname will default to runtime default 
DEBU[0006] Using slirp4netns netmode                    
DEBU[0006] created OCI spec and options for new container 
DEBU[0006] Allocated lock 0 for container 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 
DEBU[0006] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0006] created container "6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" 
DEBU[0007] container "6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" has work directory "/home/lumi/.local/share/containers/storage/vfs-containers/6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050/userdata" 
DEBU[0007] container "6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" has run directory "/run/user/1000/vfs-containers/6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050/userdata" 
DEBU[0007] New container created "6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" 
DEBU[0007] container "6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" has CgroupParent "/libpod_parent/libpod-6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" 
DEBU[0007] Handling terminal attach                     
DEBU[0007] mounted container "6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" at "/home/lumi/.local/share/containers/storage/vfs/dir/168821308bbdbd66705554acf861ce39b0d3eb2a6ed5b8fbb5082bd2a03d2964" 
DEBU[0007] Created root filesystem for container 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 at /home/lumi/.local/share/containers/storage/vfs/dir/168821308bbdbd66705554acf861ce39b0d3eb2a6ed5b8fbb5082bd2a03d2964 
DEBU[0007] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0007] Created OCI spec for container 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 at /home/lumi/.local/share/containers/storage/vfs-containers/6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050/userdata/config.json 
DEBU[0007] /usr/bin/conmon messages will be logged to syslog 
DEBU[0007] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 -u 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 -r /usr/bin/runc -b /home/lumi/.local/share/containers/storage/vfs-containers/6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050/userdata -p /run/user/1000/vfs-containers/6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050/userdata/pidfile -l k8s-file:/home/lumi/.local/share/containers/storage/vfs-containers/6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /run/user/1000/vfs-containers/6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/lumi/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050]"
WARN[0007] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpuset: mkdir /sys/fs/cgroup/cpuset/libpod_parent: permission denied 
DEBU[0247] Cleaning up container 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 
DEBU[0247] Network is already cleaned up, skipping...   
DEBU[0247] unmounted container "6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050" 
DEBU[0247] Cleaning up container 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 
DEBU[0247] Network is already cleaned up, skipping...   
DEBU[0247] Container 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 storage is already unmounted, skipping... 
DEBU[0247] Container 6ca78abe3a751ec0d3d87b05cafe600dd38a04323156597eb125820a76103050 storage is already unmounted, skipping... 
ERRO[0247] container creation timeout: internal libpod error 

baude commented Sep 23, 2019

WARN[0007] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpuset: mkdir /sys/fs/cgroup/cpuset/libpod_parent: permission denied 

mheon commented Sep 23, 2019

Expected error for rootless.

@lumi-sch Can you re-run that podman run --log-level=debug ... command and look in syslog (journalctl for systemd distros) for messages from conmon? The real error may be logged there.

jess-sch (Author) commented:

@mheon

$ journalctl | grep conmon
Sep 23 18:00:37 Lumi-ThinkPad conmon[8066]: conmon abe271e153ad4898209f <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: conmon abe271e153ad4898209f <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.9VFR8Z}
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: conmon abe271e153ad4898209f <ninfo>: attach sock path: /run/user/1000/libpod/tmp/socket/abe271e153ad4898209f674f262d80d13de8b074438d8ca3f64786d2e59808b9/attach
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: conmon abe271e153ad4898209f <ninfo>: addr{sun_family=AF_UNIX, sun_path=/run/user/1000/libpod/tmp/socket/abe271e153ad4898209f674f262d80d13de8b074438d8ca3f64786d2e59808b9/attach}
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: conmon abe271e153ad4898209f <ninfo>: ctl fifo path: /home/lumi/.local/share/containers/storage/vfs-containers/abe271e153ad4898209f674f262d80d13de8b074438d8ca3f64786d2e59808b9/userdata/ctl
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: conmon abe271e153ad4898209f <ninfo>: terminal_ctrl_fd: 14
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: conmon abe271e153ad4898209f <ninfo>: about to accept from console_socket_fd: 4
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: conmon abe271e153ad4898209f <ninfo>: about to recvfd from connfd: 16
Sep 23 18:00:37 Lumi-ThinkPad conmon[8067]: [82B blob data]

mheon commented Sep 23, 2019

On my system, that [82B blob data] is followed by:

Sep 23 12:09:58 bellerophon.lldp.net conmon[19685]: conmon c2dc29e38185eb868184 <ndebug>: container PID: 19698

mheon commented Sep 23, 2019

So I think this means that Conmon is hanging while waiting for runc to successfully execute, which may very well mean runc itself is hanging.

mheon commented Sep 23, 2019

Out of curiosity, is anything different if you run Podman without -t, i.e. podman run --log-level=debug --rm -i alpine?

mheon commented Sep 23, 2019

We have to be hanging in this section of conmon: https://github.com/containers/conmon/blob/master/src/conmon.c#L1550-L1581

I don't see anything obvious in here. @haircommander Any ideas?

Might help to find Conmon's PID and attach with strace to figure out where it's hanging.
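
A sketch of that debugging step, assuming the stuck conmon is easy to pick out of the process list (the pgrep/strace usage is illustrative, not quoted from the thread):

  pgrep -a conmon            # list running conmon processes with their full command lines
  strace -f -p <conmon-pid>  # attach to the suspect one; -f also follows the forked runc child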

jess-sch (Author) commented:

@mheon

podman run --log-level=debug --rm -i alpine

INFO[0000] running as rootless                          
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/lumi/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver vfs                       
DEBU[0000] Using graph root /home/lumi/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /home/lumi/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/lumi/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "vfs"   
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]docker.io/library/alpine:latest" 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] Got mounts: []                               
DEBU[0000] Got volumes: []                              
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Using slirp4netns netmode                    
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 0 for container 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae 
DEBU[0000] parsed reference into "[vfs@/home/lumi/.local/share/containers/storage+/run/user/1000]@961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] exporting opaque data as blob "sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4" 
DEBU[0000] created container "564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" 
DEBU[0000] container "564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" has work directory "/home/lumi/.local/share/containers/storage/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata" 
DEBU[0000] container "564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" has run directory "/run/user/1000/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata" 
DEBU[0000] New container created "564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" 
DEBU[0000] container "564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" has CgroupParent "/libpod_parent/libpod-564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" 
DEBU[0000] mounted container "564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" at "/home/lumi/.local/share/containers/storage/vfs/dir/ee071bc55c8126563bf92aeb8bc6b923b83715082df495d34574c04c939b2104" 
DEBU[0000] Created root filesystem for container 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae at /home/lumi/.local/share/containers/storage/vfs/dir/ee071bc55c8126563bf92aeb8bc6b923b83715082df495d34574c04c939b2104 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Created OCI spec for container 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae at /home/lumi/.local/share/containers/storage/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae -u 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae -r /usr/bin/runc -b /home/lumi/.local/share/containers/storage/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata -p /run/user/1000/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata/pidfile -l k8s-file:/home/lumi/.local/share/containers/storage/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog -i --conmon-pidfile /run/user/1000/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/lumi/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for memory: mkdir /sys/fs/cgroup/memory/libpod_parent: permission denied 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0240] Cleaning up container 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae 
DEBU[0240] Network is already cleaned up, skipping...   
DEBU[0240] unmounted container "564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae" 
DEBU[0240] Cleaning up container 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae 
DEBU[0240] Network is already cleaned up, skipping...   
DEBU[0240] Container 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae storage is already unmounted, skipping... 
DEBU[0240] Container 564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae storage is already unmounted, skipping... 
ERRO[0240] container creation timeout: internal libpod error 

journalctl:

Sep 23 19:01:39 Lumi-ThinkPad conmon[8698]: conmon 564391ab492f7751b116 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 23 19:01:39 Lumi-ThinkPad conmon[8699]: conmon 564391ab492f7751b116 <ninfo>: attach sock path: /run/user/1000/libpod/tmp/socket/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/attach
Sep 23 19:01:39 Lumi-ThinkPad conmon[8699]: conmon 564391ab492f7751b116 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/run/user/1000/libpod/tmp/socket/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/attach}
Sep 23 19:01:39 Lumi-ThinkPad conmon[8699]: conmon 564391ab492f7751b116 <ninfo>: ctl fifo path: /home/lumi/.local/share/containers/storage/vfs-containers/564391ab492f7751b116de42645394b8a6d4ec8e1fa406d605594b49eaf7a2ae/userdata/ctl
Sep 23 19:01:39 Lumi-ThinkPad conmon[8699]: conmon 564391ab492f7751b116 <ninfo>: terminal_ctrl_fd: 13

mheon commented Sep 23, 2019

Hm. Exactly identical. Not terminal vs nonterminal code, then. Interesting.

baude commented Sep 23, 2019

@lumi-sch I took the time to set up an Arch VM and try to reproduce your report. I think the issue boils down to configuration files, and step 2 is the likely culprit. You do not need to do your step #2 on Arch. The defaults for Podman users are taken from /usr/share/containers/libpod.conf. If you need to change the default configuration, copy the file from /usr/share/containers to /etc/containers, then edit the version in /etc/containers, as sketched below.

If you are trying to use cgroups v2, however, you must use crun: the current upstream runc code is not cgroups v2 capable.
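
A sketch of that copy-then-edit workflow, assuming the stock file ships at the path named above (the editor invocation is illustrative):

  sudo cp /usr/share/containers/libpod.conf /etc/containers/libpod.conf
  sudoedit /etc/containers/libpod.conf      # adjust the runtime setting here, e.g. runtime = "runc"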

jess-sch (Author) commented:

@baude I tried to do that, and it turns out the configuration file already said runtime = "runc"... However, the issue persists.

baude commented Sep 23, 2019

@lumi-sch Any chance you could join us on IRC (#podman on Freenode)? Maybe we can actively debug this.

jess-sch commented Oct 5, 2019

Installing fuse-overlayfs (from AUR) fixes this issue.
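
For reference, a sketch of that fix; the AUR helper and the storage.conf keys are assumptions about how rootless fuse-overlayfs is typically enabled, not details quoted from this thread:

  yay -S fuse-overlayfs                     # fuse-overlayfs was an AUR package at the time; yay is just an example helper
  # then, in ~/.config/containers/storage.conf (the rootless config file shown in the podman info output above):
  #   [storage]
  #   driver = "overlay"
  #   [storage.options]
  #   mount_program = "/usr/bin/fuse-overlayfs"
  podman info | grep -i graphdrivername     # should now report overlay instead of vfs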

jess-sch closed this as completed Oct 5, 2019