SELinux denials when using z on a shared cache directory with many containers #20237
Also, setting |
Could you please show the output of the command?
|
@flouthoc I am not 100% sure how Windmill is doing their nsjail stuff but I can try to see if I can get a single podman command to reproduce.
@giuseppe I assume you meant for an
|
sorry, yes I meant "ls -dZ". The label seems correct after your command. Does it change at runtime? Is it the same label you have for |
It seems to have the wrong label on the file.
# ls -dZ storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc
system_u:object_r:container_file_t:s0:c922,c989 'storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc'
# podman run --rm -v ./storage/jobs/windmill/worker-cache:/tmp/windmill/cache:z fedora ls -dZ /tmp/windmill/cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc
ls: cannot access '/tmp/windmill/cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc': Permission denied |
Were these files created by a container that didn't have :z? I am not able to reproduce the issue locally. The only way I've found to end up with your configuration is to mount a subdirectory of the volume into a different container with ":Z", e.g. by running something like the sketch after this comment. Is that something you've done? Please share the complete command line you used. Was the volume empty when you first passed it to a container? |
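For illustration only, a sequence of that shape might look like the following; the paths are hypothetical and not taken from this report:
$ podman run --rm -v ./worker-cache:/cache:z fedora true
$ podman run --rm -v ./worker-cache/pip:/cache:Z fedora true
The first command relabels the whole tree with the shared label (container_file_t:s0); the second relabels only the pip subtree with a private per-container level, while the top-level directory still looks correctly labeled.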
I think that you originally ran the command with ":Z", which labeled the content specifically for a single container. Now you are labeling it ":z", which should change it to a shared label, but there is an optimization that checks whether the top-level directory is already labeled container_file_t and skips the relabel if so (this is an educated guess). If you run
$ restorecon -R -F -v /storage/jobs/worker-cache/pip |
Then again, it does not seem to happen for me.
|
What podman command are you running to label the directory? |
Bingo @giuseppe!! So I used to use the parent directory
Running @rhatdan's restorecon command above on the cache directory fixed the label:
# ls -dZ storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc
system_u:object_r:container_file_t:s0 'storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc'
Thanks a lot everyone, and sorry for the wild goose chase. You all rock! Hopefully this issue will help someone in the future who ends up in the same situation. |
I think I spoke too soon. Running @rhatdan's restorecon was only a temporary fix: any new file created by the workload inside the container still gets the wrong context. You can see that directories are created with the correct context:
# setenforce 0
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip
total 0
drwxr-xr-x. 9 root root system_u:object_r:container_file_t:s0 155 Oct 4 05:22 .
drwxr-xr-x. 9 root root system_u:object_r:container_file_t:s0 86 Oct 4 05:21 ..
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0 48 Oct 4 05:22 anyio==4.0.0
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0 56 Oct 4 05:22 certifi==2023.7.22
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0 45 Oct 4 05:22 h11==0.14.0
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0 55 Oct 4 05:22 httpcore==0.18.0
drwxr-xr-x. 5 root root system_u:object_r:container_file_t:s0 60 Oct 4 05:22 httpx==0.25.0
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0 44 Oct 4 05:22 idna==3.4
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0 52 Oct 4 05:22 sniffio==1.3.0
But files have this additional context on them:
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 171 Oct 4 05:47 '/tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc'
Enable SELinux and you cannot access the file because of the additional context:
# setenforce 1
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc
ls: cannot access '/tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc': Permission denied
Creating a file manually with another container sets the correct context:
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora touch /tmp/windmill/cache/pip/joetest
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip/joetest
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 0 Oct 4 05:33 /tmp/windmill/cache/pip/joetest
Entering the container that created the files, it has the following context:
root@689f20478830:/tmp/windmill/cache# ls -lahZ /
total 63M
dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c4,c254 103 Oct 4 05:46 .
dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c4,c254 103 Oct 4 05:46 ..
drwxrwxrwx. 5 99 99 system_u:object_r:container_file_t:s0 41 Oct 4 05:46 alloc
drwxr-xr-x. 3 root root system_u:object_r:container_file_t:s0:c4,c254 78 Aug 9 16:47 aws
drwxr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c4,c254 20 Sep 30 07:13 bin
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c4,c254 6 Sep 3 2022 boot
drwxr-xr-x. 5 root root system_u:object_r:container_file_t:s0:c4,c254 340 Oct 4 05:46 dev
drwxr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c4,c254 42 Oct 4 05:46 etc
And it can read the files it created just fine:
root@689f20478830:~# ls -lahZ /tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 171 Oct 4 05:47 '/tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc'
But any other container that shares this volume mount will get SELinux denials, because it cannot read files with the s0:c4,c254 context.
Looking into how Windmill handles pip downloads, I found its script, which effectively runs the pip install command below.
root@689f20478830:/tmp/windmill/wk-689f20478830-kyW2a# ls -dZ /tmp/windmill/wk-689f20478830-kyW2a
system_u:object_r:container_file_t:s0:c4,c254 /tmp/windmill/wk-689f20478830-kyW2a
root@689f20478830:/tmp/windmill/wk-689f20478830-kyW2a# /usr/local/bin/python3 -m pip install -v bupy -I -t ../cache --no-cache --no-color --no-deps --isolated --no-warn-conflicts --disable-pip-version-check
Using pip 23.1.2 from /usr/local/lib/python3.11/site-packages/pip (python 3.11)
Collecting bupy
Downloading bupy-0.1.2-py3-none-any.whl (15 kB)
Installing collected packages: bupy
Creating /tmp/pip-target-203yrywv/bin
changing mode of /tmp/pip-target-203yrywv/bin/bupy to 755
Successfully installed bupy-0.1.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@689f20478830:/tmp/windmill/wk-689f20478830-kyW2a# ls -lahZ ../cache/bupy/
total 52K
drwxr-xr-x. 3 root root system_u:object_r:container_file_t:s0:c4,c254 158 Oct 4 06:16 .
drwxr-xr-x. 12 root root system_u:object_r:container_file_t:s0 148 Oct 4 06:16 ..
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 44 Oct 4 06:16 __init__.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 125 Oct 4 06:16 __main__.py
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c4,c254 4.0K Oct 4 06:16 __pycache__
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 1.2K Oct 4 06:16 butane.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 8.1K Oct 4 06:16 cli.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 5.0K Oct 4 06:16 fcos.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 3.3K Oct 4 06:16 qemu.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 5.3K Oct 4 06:16 template.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 1.1K Oct 4 06:16 util.py
Pip by default uses /tmp as a workdir to unpack Python packages and then moves those files into the target directory. This brings the container's private context (s0:c4,c254 here) along into the shared volume. If you set a temp directory inside the volume mount instead, the files get the shared label:
root@689f20478830:/tmp/windmill/cache# TMPDIR=/tmp/windmill/cache/tmp /usr/local/bin/python3 -m pip install -v bupy -I -t ./pip --no-cache --no-color --no-deps --isolated --no-warn-conflicts --disable-pip-version-check
Using pip 23.1.2 from /usr/local/lib/python3.11/site-packages/pip (python 3.11)
Collecting bupy
Downloading bupy-0.1.2-py3-none-any.whl (15 kB)
Installing collected packages: bupy
Creating /tmp/windmill/cache/tmp/pip-target-p_2fyf4u/bin
changing mode of /tmp/windmill/cache/tmp/pip-target-p_2fyf4u/bin/bupy to 755
Successfully installed bupy-0.1.2
WARNING: Target directory /tmp/windmill/cache/pip/bin already exists. Specify --upgrade to force replacement.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@689f20478830:/tmp/windmill/cache# ls -lahZ pip/bupy
total 56K
drwxr-xr-x. 3 root root system_u:object_r:container_file_t:s0 158 Oct 4 06:26 .
drwxr-xr-x. 12 root root system_u:object_r:container_file_t:s0 4.0K Oct 4 06:26 ..
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 44 Oct 4 06:26 __init__.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 125 Oct 4 06:26 __main__.py
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0 4.0K Oct 4 06:26 __pycache__
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 1.2K Oct 4 06:26 butane.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 8.1K Oct 4 06:26 cli.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 5.0K Oct 4 06:26 fcos.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 3.3K Oct 4 06:26 qemu.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 5.3K Oct 4 06:26 template.py
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 1.1K Oct 4 06:26 util.py
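If patching Windmill isn't an option, a possible workaround might be to set TMPDIR when starting the worker container, assuming the worker process honors it for its pip invocations; this is only a sketch, and the image name and paths are assumptions on my part:
$ mkdir -p ./worker-cache/tmp
$ podman run -d \
    -v ./worker-cache:/tmp/windmill/cache:z \
    -e TMPDIR=/tmp/windmill/cache/tmp \
    ghcr.io/windmill-labs/windmill:main
With TMPDIR inside the :z-labeled volume, pip's unpack directory carries the shared container_file_t:s0 label, so files moved from it keep that label too, as the output above shows.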
I will open an issue on the Windmill project to see how they want to handle this problem. I wanted to post my findings to get some closure on another SELinux adventure. I don't think this is a Podman-specific issue, and if you agree, feel free to close it. |
Yes, I think this is not a podman issue, since the command sets the label. Thanks for investigating it. Another option could be to force all these containers to run with the same label (see the sketch below). |
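For illustration, forcing a common label might look like the following; the level s0:c100,c200 is an arbitrary example and the image name is an assumption, not taken from the thread:
$ podman run -d --security-opt label=level:s0:c100,c200 \
    -v ./worker-cache:/tmp/windmill/cache:z \
    ghcr.io/windmill-labs/windmill:main
Every worker started with the same level can read the files the others created, since the MCS categories match.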
That is an awesome idea. I can set those easily in the Nomad podman driver config for these workers. I submitted a PR to the Windmill project that makes pip use a tmp directory inside the volume mount. If they don't accept that PR, I will use this. Thanks a ton @giuseppe! |
Errrm, that context prevents the container from being able to write to itself.
Any ideas @giuseppe? |
Setting |
Issue Description
I have a handful of worker containers that share a dependency cache on the same server. On a fresh compute node, the first worker gets a job, which runs my Python code in an nsjail inside the container. As part of that, it downloads and caches all of the needed Python modules into a directory that is mounted with the :z option.
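As a rough sketch of the setup (the host path matches the one shown earlier; the image name is an assumption), each worker mounts the same host directory:
$ podman run -d --name worker1 -v ./storage/jobs/windmill/worker-cache:/tmp/windmill/cache:z ghcr.io/windmill-labs/windmill:main
$ podman run -d --name worker2 -v ./storage/jobs/windmill/worker-cache:/tmp/windmill/cache:z ghcr.io/windmill-labs/windmill:main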
The first worker can write to /tmp/windmill/cache just fine. If I run the job again and it happens to land on a different worker container, I get a bunch of AVC denial errors.
If I keep re-running the job, it will eventually hit the worker container that initially downloaded the Python modules, and the code runs just fine since that worker can read the modules without SELinux drama.
Steps to reproduce the issue
Run windmill.dev with Podman and SELinux turned on.
Describe the results you received
Odd SELinux denials.
Describe the results you expected
Amazing Python code execution 100% of the time.
podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
Yes
Additional environment details
Windmill uses nsjail inside its worker containers to isolate code execution. This might be a factor here.