
[runc] podman run --kernel-memory=... not respected #12045

Closed
jerboaa opened this issue Oct 20, 2021 · 7 comments · Fixed by #12048
Labels
locked - please file new issue/PR (Assist humans wanting to comment on an old issue or PR with locked comments.) · Question (Issue is a question about Podman)

Comments

jerboaa commented Oct 20, 2021

/kind bug

Description
Kernel memory settings don't propagate to the cgroup filesystem on aarch64. It works fine using crun as the runtime (non-default).

Steps to reproduce the issue:

# uname -m
aarch64
# whoami
root
# podman run --kernel-memory=100m --rm -ti fedora:34 cat sys/fs/cgroup/memory/memory.kmem.limit_in_bytes
9223372036854710272

Describe the results you received:
9223372036854710272 (unlimited) on cgroups v1

Describe the results you expected:
104857600

Additional information you deem important (e.g. issue happens only occasionally):
Works fine with crun as runtime. runc is the default, though.

# podman run --runtime /usr/bin/crun --kernel-memory=100m --rm -ti fedora:34 cat sys/fs/cgroup/memory/memory.kmem.limit_in_bytes
104857600
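For reference, the expected value is exactly 100 MiB expressed in bytes, which is what the cgroup file reports when the limit is applied:

```shell
# --kernel-memory=100m means 100 MiB; memory.kmem.limit_in_bytes reports bytes.
echo $((100 * 1024 * 1024))   # 104857600
```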

Output of podman version:

Version:      3.2.3
API Version:  3.2.3
Go Version:   go1.15.7
Built:        Tue Jul 27 03:30:08 2021
OS/Arch:      linux/arm64

Output of podman info --debug:

host:
  arch: arm64
  buildahVersion: 1.21.3
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: 6c00f196e70c84a22f57f61792e879cb37b029ea'
  cpus: 4
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: <redacted>
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-305.16.1.el8_4.aarch64
  linkmode: dynamic
  memFree: 1779171328
  memTotal: 5846401024
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.aarch64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 6450774016
  swapTotal: 6450774016
  uptime: 2h 26m 26.04s (Approximately 0.08 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.2.3
  Built: 1627371008
  BuiltTime: Tue Jul 27 03:30:08 2021
  GitCommit: ""
  GoVersion: go1.15.7
  OsArch: linux/arm64
  Version: 3.2.3

Package info:

$ rpm -q podman
podman-3.2.3-0.10.module+el8.4.0+11989+6676f7ad.aarch64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes (to troubleshooting guide). No to latest version (as that is difficult for me on aarch64).

Additional environment details (AWS, VirtualBox, physical, etc.):
physical.

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 20, 2021
@jerboaa jerboaa changed the title [runc] podman run --kernel-memory=... not respected on aarch64 [runc] podman run --kernel-memory=... not respected Oct 20, 2021
jerboaa (Author) commented Oct 20, 2021

This is actually reproducible on Fedora 34 on x86_64 too when using runc as runtime:

# uname -m
x86_64
# whoami
root
# podman run --runtime /usr/bin/runc --kernel-memory=100m --rm -ti fedora:34 cat sys/fs/cgroup/memory/memory.kmem.limit_in_bytes
9223372036854771712
# podman run --runtime /usr/bin/crun --kernel-memory=100m --rm -ti fedora:34 cat sys/fs/cgroup/memory/memory.kmem.limit_in_bytes
104857600
# rpm -q runc
runc-1.0.2-2.fc34.x86_64
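A side note on the two "unlimited" readings (my own arithmetic, not from the thread): both values appear to be 2^63 rounded down by one page, which would explain why they differ between the 4 KiB-page x86_64 host and the 64 KiB-page aarch64 RHEL kernel:

```shell
# Hypothesis: the "no limit" value is 2^63 minus one page size.
echo $(( 9223372036854775807 - 4095 ))    # 9223372036854771712 (x86_64, 4 KiB pages)
echo $(( 9223372036854775807 - 65535 ))   # 9223372036854710272 (aarch64 RHEL, 64 KiB pages)
```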

jerboaa (Author) commented Oct 20, 2021

Workaround is to set runtime = "crun" in containers.conf.
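A minimal sketch of that workaround, assuming the standard containers.conf(5) `[engine]` table (podman also honors the `CONTAINERS_CONF` environment variable, which lets you try the change without editing /etc/containers/containers.conf):

```shell
# Write a containers.conf fragment that selects crun as the default runtime.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[engine]
runtime = "crun"
EOF
cat "$conf"
# To use it without touching the system-wide config:
#   CONTAINERS_CONF="$conf" podman run --kernel-memory=100m --rm -ti fedora:34 true
```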

AkihiroSuda (Collaborator) commented:

This has been deprecated, and runc dropped support for --kernel-memory in rc94:
opencontainers/runc#2840

So, ignoring --kernel-memory is the expected behavior for runc.

@AkihiroSuda AkihiroSuda added Question Issue is a question about Podman and removed kind/bug Categorizes issue or PR as related to a bug. labels Oct 20, 2021
mheon (Member) commented Oct 20, 2021

We may want to update Podman to warn on use of unsupported limits, though we'd need a mechanism for identifying which runtime supports what. That could be a sizable effort, depending on how much variation there is between runtimes.

rhatdan (Member) commented Oct 20, 2021

@AkihiroSuda @giuseppe Why is this deprecated? Is the cgroup support no good? Should crun also drop support? Should we deprecate and hide the option?

rhatdan (Member) commented Oct 20, 2021

I hate having options that say "Don't touch this", because human instinct is to touch it...

AkihiroSuda (Collaborator) commented:

Why is this deprecated? Is the cgroup support no good? Should crun also drop support?

From opencontainers/runc#2840 :

Per-cgroup kernel memory limiting was always problematic. A few examples:

  • older kernels had bugs and were even oopsing sometimes (best example
    is RHEL7 kernel);
  • kernel is unable to reclaim the kernel memory so once the limit is
    hit a cgroup is toasted;
  • some kernel memory allocations don't allow failing.

In addition to that,

  • users don't have a clue about how to set kernel memory limits
    (as the concept is much more complicated than e.g. [user] memory);
  • different kernels might have different kernel memory usage,
    which is sort of unexpected;
  • cgroup v2 do not have a [dedicated] kmem limit knob, and thus
    runc silently ignores kernel memory limits for v2;
  • kernel v5.4 made cgroup v1 kmem.limit obsoleted (see
    torvalds/linux@0158115).

Should crun also drop support? Should we deprecate and hide the option?

Yes, the runtime spec also recommends not to support kernel memory

https://github.com/opencontainers/runtime-spec/pull/1093/files
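The cgroup v2 point above is easy to check from a shell: cgroup v2 exposes a unified top-level cgroup.controllers file and has no memory.kmem.limit_in_bytes knob at all. A small sketch, demonstrated against temporary directories that mimic the two layouts (a real check would pass /sys/fs/cgroup):

```shell
# Report whether a cgroup mount point looks like v1 or v2.
# cgroup v2 has a top-level cgroup.controllers file; v1 does not.
cgroup_version() {
    if [ -f "$1/cgroup.controllers" ]; then
        echo "v2"
    else
        echo "v1"
    fi
}

# Demo against mock directories mimicking the real mount layouts.
v2=$(mktemp -d); touch "$v2/cgroup.controllers"
v1=$(mktemp -d); mkdir -p "$v1/memory"
cgroup_version "$v2"   # v2
cgroup_version "$v1"   # v1
```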

rhatdan added a commit to rhatdan/podman that referenced this issue Oct 21, 2021
The kernel memory option has been deprecated in the runtime spec. It is
believed that it will not work properly on certain kernels. runc
ignores it.

This PR removes documentation of the flag and also prints a warning if
a user uses it.

[NO NEW TESTS NEEDED]

Helps Fix: containers#12045

Signed-off-by: Daniel J Walsh <[email protected]>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023