podman-remote exec does not emit the exec_died event #20188

Closed
cgiradkar opened this issue Sep 28, 2023 · 2 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@cgiradkar
Contributor

Issue Description

When running the integration tests with make localintegration against podman-remote, the test "podman healthcheck single healthy result changes failed to healthy" fails. This indicates that the exec_died event is not emitted in remote mode.
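
For reference, the failing test can also be run on its own. This is only a sketch; it assumes the FOCUS variable and the localintegration/remoteintegration Makefile targets behave as documented for podman's integration test suite (adjust to your checkout):

  # assumed: FOCUS and the remoteintegration target work as in current podman test docs
  $ make localintegration FOCUS="healthcheck single healthy result changes failed to healthy"
  $ make remoteintegration FOCUS="healthcheck single healthy result changes failed to healthy"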

Steps to reproduce the issue

  1. Terminal 1: bin/podman system service -t0
  2. Terminal 2: bin/podman events --filter event=exec_died --filter event=exec (an alternative that reads the event stream from the API socket directly is sketched after this list)
  3. bin/podman run -d --name c1 alpine sleep 1000
  4. bin/podman-remote exec c1 true
  5. bin/podman exec c1 true

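As an alternative to step 2, the same event stream can be read straight from the API socket started in step 1. A minimal sketch, assuming the default rootless socket path and that the libpod events endpoint accepts a JSON-encoded filters parameter (adjust the API version segment to your build):

  # assumed: default rootless socket path and JSON-encoded filters on /libpod/events
  $ curl -sG --unix-socket /run/user/$UID/podman/podman.sock \
        --data-urlencode 'filters={"event":["exec","exec_died"]}' \
        http://d/v4.0.0/libpod/events

Each matching event is streamed as a JSON object; with this bug, no exec_died entry shows up for execs started through podman-remote.
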
Describe the results you received

Checking the events in terminal 2: with regular podman both the exec and exec_died events appear, but with podman-remote neither event is shown.

Describe the results you expected

Both the exec and exec_died events should be emitted for podman-remote exec as well.

podman info output

host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.4-1.module+el8.7.0+17824+66a0202b.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.4, commit: 3af09f8ff9780bd806d2e2db2a8518250622d362'
  cpuUtilization:
    idlePercent: 92.82
    systemPercent: 1.9
    userPercent: 5.28
  cpus: 8
  databaseBackend: boltdb
  distribution:
    distribution: rhel
    version: "8.4"
  eventLogger: file
  freeLocks: 2048
  hostname: cgiradka.wat.csb
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 4213400
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 4213400
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 4.18.0-425.19.2.el8_7.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 3119435776
  memTotal: 33197223936
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.5.0-2.module+el8.8.0+18060+3f21f2cc.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.5.0
    package: netavark-1.5.1-2.module+el8.8.0+19031+df0566b7.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.5.0
  ociRuntime:
    name: runc
    package: runc-1.1.4-1.module+el8.7.0+17824+66a0202b.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.4
      spec: 1.0.2-dev
      go: go1.18.4
      libseccomp: 2.5.2
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: false
    path: /run/user/4213400/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.7.0+17824+66a0202b.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 8582328320
  swapTotal: 8589930496
  uptime: 49h 46m 8.00s (Approximately 2.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/cgiradka/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/cgiradka/.local/share/containers/storage
  graphRootAllocated: 107368579072
  graphRootUsed: 44658200576
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/4213400/containers
  transientStore: false
  volumePath: /home/cgiradka/.local/share/containers/storage/volumes
version:
  APIVersion: 4.8.0-dev
  Built: 1695906424
  BuiltTime: Thu Sep 28 14:07:04 2023
  GitCommit: 2b38742d1242294f23a3382a21a138e240725b6d-dirty
  GoVersion: go1.19.10
  Os: linux
  OsArch: linux/amd64
  Version: 4.8.0-dev

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Additional information


@cgiradkar cgiradkar added the kind/bug Categorizes issue or PR as related to a bug. label Sep 28, 2023
@flouthoc
Collaborator

Hi @cgiradkar, I am unable to reproduce this issue on upstream podman with the reproducer you provided; I get events from both podman and podman-remote:

$ ./podman events --filter event=exec_died --filter event=exec
2023-09-29 11:53:44.643679537 +0530 IST container exec ea18db55ef9e456220cc8fa6813a82f335e5cc65face0880c03f8ec8b1a7a466 (image=docker.io/library/alpine:latest, name=c1)
2023-09-29 11:53:44.677333555 +0530 IST container exec_died ea18db55ef9e456220cc8fa6813a82f335e5cc65face0880c03f8ec8b1a7a466 (image=docker.io/library/alpine:latest, name=c1)


2023-09-29 11:53:56.457323678 +0530 IST container exec ea18db55ef9e456220cc8fa6813a82f335e5cc65face0880c03f8ec8b1a7a466 (image=docker.io/library/alpine:latest, name=c1)
2023-09-29 11:53:56.507199495 +0530 IST container exec_died ea18db55ef9e456220cc8fa6813a82f335e5cc65face0880c03f8ec8b1a7a466 (image=docker.io/library/alpine:latest, name=c1)
2023-09-29 11:53:56.512254564 +0530 IST container exec_died ea18db55ef9e456220cc8fa6813a82f335e5cc65face0880c03f8ec8b1a7a466 (image=docker.io/library/alpine:latest, name=c1)

@Luap99
Member

Luap99 commented Sep 29, 2023

Yes, it seems this only happens with PR #20132; I tested the wrong binary yesterday when I checked it.

@Luap99 Luap99 closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 29, 2023
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Dec 29, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 29, 2023