
error streaming container content for copy up into volume [...] : copier: get: globs [/rabbitmq] #9432

Closed
martinpitt opened this issue Feb 19, 2021 · 2 comments
Labels: kind/bug, locked - please file new issue/PR

Comments

@martinpitt (Contributor)

/kind bug

Description

Integration tests in our infrastructure project on GitHub workflows have started to fail due to a podman regression. This can be reproduced with:

$ sudo podman run -it --rm docker.io/library/rabbitmq:3-management                                         
Error: error streaming container content for copy up into volume 3792b5d3eab37c1eb2c7f4bdbc2c48f2d9a77dea7b2d75a6321805e3af466aa1: copier: get: globs [/rabbitmq] matched nothing (0 filtered out): no such file or directory

This only happens with some images (in particular this RabbitMQ one); others such as docker.io/fedora:latest or quay.io/cockpit/tasks are fine.
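Presumably the difference is that the failing images declare an anonymous volume in their Dockerfile (the RabbitMQ image ships a VOLUME for /var/lib/rabbitmq), and that is what triggers the copy-up. A quick way to check, assuming the image is already pulled, is to inspect its volume list; this should print something like map[/var/lib/rabbitmq:{}]:

$ podman image inspect --format '{{.Config.Volumes}}' docker.io/library/rabbitmq:3-management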

This is an Ubuntu 20.04 host (GitHub workflows only offer Ubuntu 18.04 and 20.04) with the semi-official kubic packages.

This uses system podman, as rootless podman does not work very well in that environment -- e.g. the kubic packages don't pull in slirp4netns or fuse-overlayfs, so networking is broken by default and performance is very poor.
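If rootless mode were needed anyway, installing the missing pieces by hand should help; a sketch, assuming the stock Ubuntu 20.04 package names:

$ sudo apt install slirp4netns fuse-overlayfs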

Interestingly, though, this does work with rootless podman in that environment: the container starts and I see the RabbitMQ startup messages. This is surprising, since the error above sounds like a bug in layer unpacking/handling, which at first sight should not depend on system vs. rootless mode.

On my current F34 workstation I have podman-3.0.0-0.204.dev.gita086f60.fc34.x86_64, where this works fine. That is a slightly older build, so I'll try again in an F34 VM with the latest podman.

Output of podman version:

Version:      3.0.0
API Version:  3.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.2
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.26, commit: '
  cpus: 2
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: fv-az118-501
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.0-1039-azure
  linkmode: dynamic
  memFree: 4601589760
  memTotal: 7292186624
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17.6-58ef-dirty
      commit: fd582c529489c0738e7039cbc036781d1d039014
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 13m 22.31s
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 3
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.0.0

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 100:3.0.0-4 amd64 [installed]

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.): GitHub-hosted workflow runner (an Azure VM, judging by the kernel version above)

@openshift-ci-robot added the kind/bug label Feb 19, 2021
@martinpitt (Contributor, Author)

I found a workaround:

sudo podman run -it --tmpfs /var/lib/rabbitmq --rm docker.io/library/rabbitmq:3-management

This avoids creating an on-disk volume for /var/lib/rabbitmq, which somehow circumvents the problem.
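To double-check that the workaround took effect, the mount type can be verified from inside the running container; a sketch, with CONTAINER standing in for the actual container name or ID:

$ sudo podman exec CONTAINER grep /var/lib/rabbitmq /proc/mounts

This should show a tmpfs entry for /var/lib/rabbitmq rather than a bind mount from /var/lib/containers/storage/volumes.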

martinpitt added a commit to cockpit-project/cockpituous that referenced this issue Feb 19, 2021
Current podman 3.0.0 in system mode has a regression on Ubuntu hosts
with handling anonymous volumes, see
containers/podman#9432 for details.

Work around this by placing RabbitMQ's data into a tmpfs volume.
@mheon (Member) commented Feb 19, 2021

Dupe of #9393, closing. Fixed in 3.0.1
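Once 3.0.1 reaches the kubic repository, upgrading and re-running the reproducer should confirm the fix; a sketch, using the package name installed above (the exact version string may differ):

$ sudo apt update && sudo apt install --only-upgrade podman
$ podman version
$ sudo podman run -it --rm docker.io/library/rabbitmq:3-management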

@mheon mheon closed this as completed Feb 19, 2021
@github-actions bot added the locked - please file new issue/PR label and locked the conversation as resolved Sep 22, 2023