Podman hangs at run #4079
Can you add --log-level=debug to your Podman run command and paste the output?
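For example (a sketch; --log-level is a global flag, so it goes before the subcommand):

  podman --log-level=debug run --rm -it alpine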
On Sun, Sep 22, 2019, 15:24, Lumi Schallenberg wrote:
Steps to reproduce the issue:

1. Set up Arch, pacman -S podman, and set up /etc/subuid and /etc/subgid as described in the documentation.
2. Write runtime = "runc" to /etc/containers/libpod.conf, because Podman uses crun by default (which doesn't exist on Arch); see the sketch after this list.
3. podman run --rm -it alpine
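For reference, a minimal sketch of the step-2 edit (assuming libpod.conf's TOML-style syntax):

  # /etc/containers/libpod.conf
  # Select the OCI runtime binary libpod should use
  runtime = "runc"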
Describe the results you received:
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob 9d48c3bd43c5 done
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures
Error: container creation timeout: internal libpod error
Describe the results you expected:
I get dropped into an ash session in an Alpine container
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
podman version 1.5.1
Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.8
  podman version: 1.5.1
host:
  BuildahVersion: 1.10.1
  Conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.0.0, commit: e217fdff82e0b1a6184a28c43043a4065083407f'
  Distribution:
    distribution: arch
    version: unknown
  MemFree: 6101864448
  MemTotal: 8159133696
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8
      commit: 425e105d5a03fabd737a126ad93d62a9eeede87f
      spec: 1.0.1-dev
  SwapFree: 12884897792
  SwapTotal: 12884897792
  arch: amd64
  cpus: 4
  eventlogger: journald
  hostname: Lumi-ThinkPad
  kernel: 5.3.0-arch1-1-ARCH
  os: linux
  rootless: true
  uptime: 13m 23.94s
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/lumi/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/lumi/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 1
  RunRoot: /run/user/1000
  VolumePath: /home/lumi/.local/share/containers/storage/volumes
Additional environment details (AWS, VirtualBox, physical, etc.):

- Running on a ThinkPad X230, on a fresh install of Arch Linux.
- /etc/subuid: lumi:10000:55537
- /etc/subgid: lumi:10000:55537
- Possibly relevant: the debug output says GraphDriverName: vfs, but in /etc/containers/storage.conf the storage driver was configured to be overlayfs.
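A quick way to confirm which storage driver is actually in effect (a sketch, filtering the same report shown above):

  podman info | grep -i graphdrivername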
Expected error for rootless. @lumi-sch Can you re-run that…

On my system, that…

So I think this means that Conmon is hanging waiting for…

Out of curiosity, is anything different if you run Podman without…
We have to be hanging in this section of… I don't see anything obvious in here. @haircommander Any ideas? Might help to find Conmon's PID and attach with strace, to figure out where it's hanging.
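A sketch of that debugging step, assuming strace is available (the output filename is a placeholder):

  pgrep -a conmon                    # list conmon processes and their PIDs
  strace -f -p <PID> -o conmon.log   # attach and log syscalls to conmon.log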
journalctl:…
Hm. Exactly identical. Not terminal vs. non-terminal code, then. Interesting.
In case it's useful to anyone, I collected strace logs: 12376.txt
@lumi-sch I took the time to set up an Arch VM and try to reproduce your report. I think the issue boils down to configuration files, and step 2 is your culprit. You do not need your step #2 on Arch; the defaults for Podman users are taken from… If you are trying to use cgroups v2, however, you must use crun. The current upstream runc code is not cgroups v2 capable.
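One way to check which cgroup version the host is running (a sketch; this is not from the thread):

  stat -fc %T /sys/fs/cgroup/   # prints "cgroup2fs" on a cgroups-v2 host, "tmpfs" on v1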
@baude I tried to do that, and it turns out the configuration file already said…
@lumi-sch Any chance you could join us on IRC? #podman on Freenode; maybe we can actively debug this.
Installing fuse-overlayfs (from the AUR) fixes this issue.
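For anyone hitting this later, a sketch of that fix (the yay AUR helper and the exact config values are assumptions, not from this thread):

  yay -S fuse-overlayfs   # or build the AUR package by hand

  # ~/.config/containers/storage.conf (rootless)
  [storage]
  driver = "overlay"

  [storage.options]
  mount_program = "/usr/bin/fuse-overlayfs"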