rhel78: mountpoint for devices not found #7004
This looks like it is trying to apply cgroups in a rootless environment?

@giuseppe Any thoughts on what is going on? It looks like it is trying to use cgroups in rootless containers on RHEL 7.

The error is coming from runc; we should not be setting any devices cgroup, and I've never seen that error before. Can you show me the output for:

$ podman unshare cat /proc/self/uid_map
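For reference, on a correctly configured rootless host this usually shows the user's own UID mapped to container UID 0, plus a subordinate range taken from /etc/subuid. The values below are illustrative, not taken from this report:

$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536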
Additional details on this. ERROR: Rootless

DETAILS ON FAILING ENVIRONMENT: The failing node is a physical HPE server running RHEL 7.8:
Realtime OS with default 5% CPU time reserved for non-deadline (non-RT) processes. There is one RT process running, an HPE Serviceguard config provider called
Docker is installed:
Podman is installed, and the container runtime is runc:
Conmon:
Only cgroups v1 is available/enabled. All expected cgroups are mounted (a quick way to verify this is sketched below):
libpod.conf attached: ROOTED
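One way to list the mounted v1 hierarchies and spot duplicates, assuming util-linux's findmnt is available (grepping /proc/self/mountinfo works as well):

$ findmnt -t cgroup -o TARGET,SOURCE,OPTIONS
$ grep cgroup /proc/self/mountinfo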
The systemd driver is not supported for rootless on cgroup v1; you should use cgroupfs. Have you logged in using the vagrant user? Please show me the output of these commands:
Does it work if you force cgroupfs?
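One way to force the cgroupfs manager, as a sketch rather than the exact command asked about (the flag referenced in the comment was truncated; --cgroup-manager and the cgroup_manager option do exist in Podman of this era):

$ podman --cgroup-manager=cgroupfs run --rm alpine true

or persistently, in libpod.conf:

cgroup_manager = "cgroupfs"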
Yes, cgroupfs is being used in the rootless scenario, as it should be. Debug log: The failing HPE environment is RHEL on a physical server - no virtualization. My local repro environment, where rootless podman run succeeds, is a vagrant VM on VirtualBox, but I am not logged in as vagrant. This is the login sequence I take on my local VM (edited to fix sequence):
I honestly don't know if the question of why systemd is tracking the process in the wrong user slice is relevant to this mountpoint-not-found error. Given that this named cgroup has no actual controllers, it may be a red herring. On the HP box where rootless
Strace attached:
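A way to check which cgroups the current session actually lands in, and to spot named v1 hierarchies with no controllers attached (such as name=systemd), is roughly this; loginctl with no arguments reports the caller's own session:

$ cat /proc/self/cgroup
$ loginctl session-status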
I think runc might get confused by
Do you know where the
Hmm. I do not.
@giuseppe Thanks for this pointer. Red Hat has verified that having this errant cgroup mounted does in fact cause this error when running rootless.
On this now-failing repro env, unmounting the errant cgroup(s) and running a
Not sure if this has, or will, ever come up outside edge cases like ours. I don't think there's any "fix" needed here, as I believe it is reasonable to expect that the systemd init system is the only author of a single v1 cgroup hierarchy.
Edited for clarity: deviations from the default controller location should be considered violations even when not managed by systemd. I've got an internal thread to find out what happens on in-place upgrade of supported-path RHEL 6 (pre-systemd - libcgroup mounted cgroups at
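The verification described above amounts to something like the following; the actual errant mountpoint was elided from the thread, so the path here is hypothetical:

$ findmnt -t cgroup                   # look for a controller mounted outside /sys/fs/cgroup
$ sudo umount /some/errant/cpuset     # hypothetical path for the duplicate hierarchy
$ podman run --rm alpine true         # rootless run then succeeds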
@a-trout-in-the-milk thanks for confirming it. I don't think such edge cases are going to be addressed in RHEL 7 anyway. Feel free to include me in any Red Hat discussion on the problem you are having, but let's close the issue here, as it cannot be addressed in future versions of Podman anyway. For cgroup v2, we are already assuming all over the stack that cgroups are mounted at /sys/fs/cgroup.
@giuseppe do you know if this is still an issue with RHEL 8, i.e., if rootless podman has problems with the two mounted cpusets?
Could you try whether it works for you with crun instead of runc? crun is available in RHEL 8.
Unfortunately, we won't be moving to RHEL 8 for a few months, so I am unable to try your suggestion. However, I found a workaround: if I unmount /dev/cpuset and then remount it, rootless Podman works. It appears the relative order of /sys/fs/cgroup/cpuset and /dev/cpuset in /proc/self/mountinfo matters.
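In shell terms the workaround is roughly the following, assuming the second cpuset hierarchy is mounted at /dev/cpuset as in this report (mount options may need adjusting for the local setup):

$ sudo umount /dev/cpuset
$ sudo mount -t cgroup -o cpuset cpuset /dev/cpuset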
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
podman run with a non-root user does not work. The operating system is RHEL 7.8. slirp4netns and podman were installed via yum. Also,
user.max_user_namespaces=28633
is configured on the system.
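A quick way to check this sysctl and set it persistently; the file name under /etc/sysctl.d is illustrative:

$ sysctl user.max_user_namespaces
$ echo "user.max_user_namespaces=28633" | sudo tee /etc/sysctl.d/99-userns.conf
$ sudo sysctl --system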
Steps to reproduce the issue:
Describe the results you received:
Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
It works with the root user. Also, runc works with a non-root user. The following is part of the debug log from podman:
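For completeness, a log like this can be produced with Podman's global log-level flag; the target command here is a stand-in:

$ podman --log-level=debug run --rm alpine true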
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
Additional environment details (AWS, VirtualBox, physical, etc.):
physical server