
SELinux denials when using z on a shared cache directory with many containers #20237

Closed
jdoss opened this issue Oct 3, 2023 · 16 comments
Labels: kind/bug

jdoss (Contributor) commented Oct 3, 2023

Issue Description

I have a handful of worker containers that share a dependency cache on the same server. On a fresh compute node, the first worker gets a job, which runs my Python code in an nsjail inside the container. As part of that, it downloads and caches all of the needed Python modules into a directory that is mounted with:

/storage/jobs/worker-cache:/tmp/windmill/cache:z

The first worker can write to /tmp/windmill/cache just fine. If I run the job again and it happens to run on a different worker container I get a bunch of AVC denial errors.
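Each worker container mounts the same host directory, roughly like this (the image name here is a placeholder, not the actual Windmill invocation):

# podman run -d --name worker-1 -v /storage/jobs/worker-cache:/tmp/windmill/cache:z <worker-image>
# podman run -d --name worker-2 -v /storage/jobs/worker-cache:/tmp/windmill/cache:z <worker-image>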

# journalctl -f --no-pager --no-hostname | grep enied
Oct 03 00:42:31 audit[18449]: AVC avc:  denied  { read } for  pid=18449 comm="python3" name="__init__.cpython-311.pyc" dev="md124" ino=1073761653 scontext=system_u:system_r:container_t:s0:c445,c507 tcontext=system_u:object_r:container_file_t:s0:c922,c989 tclass=file permissive=1
Oct 03 00:42:31 kernel: audit: type=1400 audit(1696293751.072:922): avc:  denied  { read } for  pid=18449 comm="python3" name="__init__.cpython-311.pyc" dev="md124" ino=1073761653 scontext=system_u:system_r:container_t:s0:c445,c507 tcontext=system_u:object_r:container_file_t:s0:c922,c989 tclass=file permissive=1
Oct 03 00:42:31 kernel: audit: type=1400 audit(1696293751.073:923): avc:  denied  { ioctl } for  pid=18449 comm="python3" path="/tmp/windmill/cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc" dev="md124" ino=1073761653 ioctlcmd=0x5401 scontext=system_u:system_r:container_t:s0:c445,c507 tcontext=system_u:object_r:container_file_t:s0:c922,c989 tclass=file permissive=1
Oct 03 00:42:31 audit[18449]: AVC avc:  denied  { ioctl } for  pid=18449 comm="python3" path="/tmp/windmill/cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc" dev="md124" ino=1073761653 ioctlcmd=0x5401 scontext=system_u:system_r:container_t:s0:c445,c507 tcontext=system_u:object_r:container_file_t:s0:c922,c989 tclass=file permissive=1

If I continue the job it will eventually hit the worker container that initially downloaded the Python modules and the code runs just fine as it can read the modules without SELinux drama.

Steps to reproduce the issue

Run windmill.dev with Podman and SELinux turned on.

Describe the results you received

Odd SELinux denials.

Describe the results you expected

Amazing Python code execution 100% of the time.

podman info output

podman info
host:
  arch: amd64
  buildahVersion: 1.31.2
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 97
    systemPercent: 1.27
    userPercent: 1.73
  cpus: 20
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "38"
  eventLogger: journald
  freeLocks: 1915
  hostname: compute-2.quickvm.com
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.4.15-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 24578404352
  memTotal: 33643163648
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-1.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.8.7-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.7
      commit: 53a9996ce82d1ee818349bdcc64797a1fa0433c4
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20230823.ga7e4bfb-1.fc38.x86_64
    version: |
      pasta 0^20230823.ga7e4bfb-1.fc38.x86_64
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.fc38.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 3913019392
  swapTotal: 4294963200
  uptime: 1h 23m 15.00s (Approximately 0.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 26
    paused: 0
    running: 26
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 321830064128
  graphRootUsed: 36792999936
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 53
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.6.2
  Built: 1693251588
  BuiltTime: Mon Aug 28 19:39:48 2023
  GitCommit: ""
  GoVersion: go1.20.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.2

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

Windmill uses nsjail inside its worker containers to isolate code execution. This might be a factor here.


jdoss added the kind/bug label Oct 3, 2023
jdoss (Contributor, Author) commented Oct 3, 2023

Also, setting selinux_opts = ["disable"] obviously works as a workaround, but I grew up in the datacenter with the "do not disable SELinux" gospel playing 24/7, and I'd like this workload to enjoy the warm comforts that SELinux provides.
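For anyone following along, the plain-podman equivalent of that option would be roughly the following, which turns off SELinux label separation for the container entirely (worker image is a placeholder):

# podman run --security-opt label=disable -v /storage/jobs/worker-cache:/tmp/windmill/cache <worker-image>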

flouthoc (Collaborator) commented Oct 3, 2023

@jdoss Is there any reproducer with simple podman commands? @rhatdan Do you have any hints here?

giuseppe (Member) commented Oct 3, 2023

tcontext=system_u:object_r:container_file_t:s0:c922,c989 is the wrong label for a shared volume.

Could you please show the output of the command?

podman run --rm -v /storage/jobs/worker-cache:/tmp/windmill/cache:z fedora -dZ /tmp/windmill/cache

jdoss (Contributor, Author) commented Oct 3, 2023

> @jdoss Is there any reproducer with simple podman commands? @rhatdan Do you have any hints here?

@flouthoc I am not 100% sure how Windmill is doing their nsjail stuff but I can try to see if I can get a single podman command to reproduce.

> tcontext=system_u:object_r:container_file_t:s0:c922,c989 is the wrong label for a shared volume.
>
> Could you please show the output of the command?
>
> podman run --rm -v /storage/jobs/worker-cache:/tmp/windmill/cache:z fedora -dZ /tmp/windmill/cache

@giuseppe I assume you meant for an ls to be in the command you wanted me to run? Here is the output:

# podman run --rm -v ./storage/jobs/windmill/worker-cache:/tmp/windmill/cache:z fedora ls -dZ /tmp/windmill/cache
system_u:object_r:container_file_t:s0 /tmp/windmill/cache

giuseppe (Member) commented Oct 3, 2023

sorry, yes I meant "ls -dZ"

the label seems correct after your command; does it change at runtime?

is it the same label you have for /tmp/windmill/cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc?

jdoss (Contributor, Author) commented Oct 3, 2023

It seems to have the wrong label on the file.

# ls -dZ storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc
system_u:object_r:container_file_t:s0:c922,c989 'storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc'
# podman run --rm -v ./storage/jobs/windmill/worker-cache:/tmp/windmill/cache:z fedora ls -dZ /tmp/windmill/cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc
ls: cannot access '/tmp/windmill/cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc': Permission denied

giuseppe (Member) commented Oct 3, 2023

were these files created by a container that didn't have :z?

I am not able to reproduce the issue locally with :z; it always applies the shared label when there is a mismatch.

The only way I've found to end up with your configuration is to mount a subdirectory of the volume into a different container with ":Z", e.g. /storage/jobs/worker-cache/pip:/tmp/windmill/cache/pip:Z, because that applies the private label only to the subdirectory. The next time you use the parent directory with :z, Podman finds the right label on it and avoids descending into the subdirectory.

Is that something you've used?

Please share the complete command line you used. Was the volume empty when you first passed it to a container?
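To be concrete, a minimal sequence that I think would get you into that state, using your paths (the fedora image is just for illustration):

# podman run --rm -v /storage/jobs/worker-cache/pip:/tmp/windmill/cache/pip:Z fedora true
# ls -dZ /storage/jobs/worker-cache/pip   # now carries a per-container s0:cX,cY label
# podman run --rm -v /storage/jobs/worker-cache:/tmp/windmill/cache:z fedora ls -dZ /tmp/windmill/cache/pip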

rhatdan (Member) commented Oct 3, 2023

I think that you originally ran the command with ":Z", which labeled the volume specifically for a single container. Now you are labeling it ":z", which should change the label to a common label, but there is an optimization that checks whether the top-level directory is already labeled container_file_t and skips the relabel (I am making an educated guess). If you run

$ restorecon -R -F -v /storage/jobs/worker-cache/pip

to change the label to something other than container_file_t, then run with :z, I am pretty sure it should work.

rhatdan (Member) commented Oct 3, 2023

Then again, it does not seem to happen for me.

$ podman run -v ./test1:/test1:Z fedora ls -lZd /test1
Resolved "fedora" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull registry.fedoraproject.org/fedora:latest...
Getting image source signatures
Copying blob 18ca996a454f done   | 
Copying config 72c9e45642 done   | 
Writing manifest to image destination
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c560,c710 6 Oct  3 19:10 /test1
podman (read-only) $ podman run -v ./test1:/test1:z fedora ls -lZd /test1
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0 6 Oct  3 19:10 /test1

rhatdan (Member) commented Oct 3, 2023

What podman command are you running to label the directory?

jdoss (Contributor, Author) commented Oct 4, 2023

> The only way I've found to end up with your configuration is to mount a subdirectory of the volume into a different container with ":Z", e.g. /storage/jobs/worker-cache/pip:/tmp/windmill/cache/pip:Z, because that applies the private label only to the subdirectory. The next time you use the parent directory with :z, Podman finds the right label on it and avoids descending into the subdirectory.
>
> Is that something you've used?

Bingo @giuseppe!! I used to use the parent directory storage/ as a host_volume in Nomad, and the Nomad podman driver lets you set z or Z by default for all volume mounts. I am sure that in the past I used Z on storage/, and that was impacting my move to z now that these workloads need to share a volume. I should have nuked storage/ before I moved things around. Lesson learned.

> I think that you originally ran the command with ":Z", which labeled the volume specifically for a single container. Now you are labeling it ":z", which should change the label to a common label, but there is an optimization that checks whether the top-level directory is already labeled container_file_t and skips the relabel (I am making an educated guess). If you run
>
> $ restorecon -R -F -v /storage/jobs/worker-cache/pip
>
> to change the label to something other than container_file_t, then run with :z, I am pretty sure it should work.

@rhatdan's restorecon command above, run on storage/, relabeled everything to system_u:object_r:var_t:s0, and after shutting down every Nomad job on my cluster and restarting them, my cache files have the right label now!

# ls -dZ storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc
system_u:object_r:container_file_t:s0 'storage/jobs/windmill/worker-cache/pip/httpx==0.24.1/httpx/__pycache__/__init__.cpython-311.pyc'

Thanks a lot everyone and sorry for the goose chase. You all rock! Hopefully this issue will help someone in the future that ends up in the same situation.

jdoss closed this as completed Oct 4, 2023
jdoss (Contributor, Author) commented Oct 4, 2023

I think I spoke too soon. @rhatdan's restorecon was only a temporary fix: any new file created by the workload inside the container still gets the wrong context.

You can see that directories are created with the correct context, system_u:object_r:container_file_t:s0:

# setenforce 0
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip
total 0
drwxr-xr-x. 9 root root system_u:object_r:container_file_t:s0 155 Oct  4 05:22 .
drwxr-xr-x. 9 root root system_u:object_r:container_file_t:s0  86 Oct  4 05:21 ..
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0  48 Oct  4 05:22 anyio==4.0.0
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0  56 Oct  4 05:22 certifi==2023.7.22
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0  45 Oct  4 05:22 h11==0.14.0
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0  55 Oct  4 05:22 httpcore==0.18.0
drwxr-xr-x. 5 root root system_u:object_r:container_file_t:s0  60 Oct  4 05:22 httpx==0.25.0
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0  44 Oct  4 05:22 idna==3.4
drwxr-xr-x. 4 root root system_u:object_r:container_file_t:s0  52 Oct  4 05:22 sniffio==1.3.0

But files have this additional context on them:

# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 171 Oct  4 05:47 '/tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc'

Re-enable SELinux and you cannot access the file, due to the additional context (system_u:object_r:container_file_t:s0:c652,c1019 in this case) on the files that the Windmill worker process created.

# setenforce 1
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc
ls: cannot access '/tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc': Permission denied

Creating a file manually with another container sets the correct context:

# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora touch /tmp/windmill/cache/pip/joetest
# podman run --rm -v ./worker-cache:/tmp/windmill/cache:z fedora ls -lahZ /tmp/windmill/cache/pip/joetest
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0 0 Oct  4 05:33 /tmp/windmill/cache/pip/joetest

Entering the container that created the files, its filesystem has the following context, system_u:object_r:container_file_t:s0:c4,c254:

root@689f20478830:/tmp/windmill/cache# ls -lahZ /
total 63M
dr-xr-xr-x.   1 root root system_u:object_r:container_file_t:s0:c4,c254  103 Oct  4 05:46 .
dr-xr-xr-x.   1 root root system_u:object_r:container_file_t:s0:c4,c254  103 Oct  4 05:46 ..
drwxrwxrwx.   5   99   99 system_u:object_r:container_file_t:s0           41 Oct  4 05:46 alloc
drwxr-xr-x.   3 root root system_u:object_r:container_file_t:s0:c4,c254   78 Aug  9 16:47 aws
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0:c4,c254   20 Sep 30 07:13 bin
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0:c4,c254    6 Sep  3  2022 boot
drwxr-xr-x.   5 root root system_u:object_r:container_file_t:s0:c4,c254  340 Oct  4 05:46 dev
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0:c4,c254   42 Oct  4 05:46 etc

And it can read the files it created just fine:

root@689f20478830:~# ls -lahZ /tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c4,c254 171 Oct  4 05:47 '/tmp/windmill/cache/pip/sniffio==1.3.0/sniffio/_tests/__pycache__/__init__.cpython-311.pyc'

But any other container sharing this volume mount will get SELinux denials, because those containers cannot read files labeled system_u:object_r:container_file_t:s0:c4,c254, only system_u:object_r:container_file_t:s0.

Looking into how Windmill handles pip downloads, I found its script, which effectively runs the pip install command below.

root@689f20478830:/tmp/windmill/wk-689f20478830-kyW2a# ls -dZ /tmp/windmill/wk-689f20478830-kyW2a
system_u:object_r:container_file_t:s0:c4,c254 /tmp/windmill/wk-689f20478830-kyW2a
root@689f20478830:/tmp/windmill/wk-689f20478830-kyW2a# /usr/local/bin/python3 -m pip install -v bupy -I -t ../cache --no-cache --no-color --no-deps --isolated --no-warn-conflicts --disable-pip-version-check
Using pip 23.1.2 from /usr/local/lib/python3.11/site-packages/pip (python 3.11)
Collecting bupy
  Downloading bupy-0.1.2-py3-none-any.whl (15 kB)
Installing collected packages: bupy
  Creating /tmp/pip-target-203yrywv/bin
  changing mode of /tmp/pip-target-203yrywv/bin/bupy to 755
Successfully installed bupy-0.1.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
 
root@689f20478830:/tmp/windmill/wk-689f20478830-kyW2a# ls -lahZ ../cache/bupy/
total 52K
drwxr-xr-x.  3 root root system_u:object_r:container_file_t:s0:c4,c254  158 Oct  4 06:16 .
drwxr-xr-x. 12 root root system_u:object_r:container_file_t:s0          148 Oct  4 06:16 ..
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254   44 Oct  4 06:16 __init__.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254  125 Oct  4 06:16 __main__.py
drwxr-xr-x.  2 root root system_u:object_r:container_file_t:s0:c4,c254 4.0K Oct  4 06:16 __pycache__
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254 1.2K Oct  4 06:16 butane.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254 8.1K Oct  4 06:16 cli.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254 5.0K Oct  4 06:16 fcos.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254 3.3K Oct  4 06:16 qemu.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254 5.3K Oct  4 06:16 template.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0:c4,c254 1.1K Oct  4 06:16 util.py

Pip by default uses /tmp as a workdir to unpack Python packages and then moves those files into the target directory. The mv brings the container's private context, system_u:object_r:container_file_t:s0:c4,c254, along with the files and causes a bunch of drama.
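You can see the mechanism with a minimal sketch from inside any container that has the :z mount: mv preserves the source file's SELinux context, while creating a file directly in the volume picks up the shared label:

# touch /tmp/scratch && mv /tmp/scratch /tmp/windmill/cache/moved   # keeps the container's private label, e.g. s0:c4,c254
# touch /tmp/windmill/cache/created                                 # picks up the shared s0 label
# ls -Z /tmp/windmill/cache/moved /tmp/windmill/cache/created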

If you set TMPDIR to a directory inside the volume mount, the files get the right contexts:

root@689f20478830:/tmp/windmill/cache# TMPDIR=/tmp/windmill/cache/tmp /usr/local/bin/python3 -m pip install -v bupy -I -t ./pip --no-cache --no-color --no-deps --isolated --no-warn-conflicts --disable-pip-version-check
Using pip 23.1.2 from /usr/local/lib/python3.11/site-packages/pip (python 3.11)
Collecting bupy
  Downloading bupy-0.1.2-py3-none-any.whl (15 kB)
Installing collected packages: bupy
  Creating /tmp/windmill/cache/tmp/pip-target-p_2fyf4u/bin
  changing mode of /tmp/windmill/cache/tmp/pip-target-p_2fyf4u/bin/bupy to 755
Successfully installed bupy-0.1.2
WARNING: Target directory /tmp/windmill/cache/pip/bin already exists. Specify --upgrade to force replacement.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@689f20478830:/tmp/windmill/cache# ls -lahZ pip/bupy
total 56K
drwxr-xr-x.  3 root root system_u:object_r:container_file_t:s0  158 Oct  4 06:26 .
drwxr-xr-x. 12 root root system_u:object_r:container_file_t:s0 4.0K Oct  4 06:26 ..
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0   44 Oct  4 06:26 __init__.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0  125 Oct  4 06:26 __main__.py
drwxr-xr-x.  2 root root system_u:object_r:container_file_t:s0 4.0K Oct  4 06:26 __pycache__
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0 1.2K Oct  4 06:26 butane.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0 8.1K Oct  4 06:26 cli.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0 5.0K Oct  4 06:26 fcos.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0 3.3K Oct  4 06:26 qemu.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0 5.3K Oct  4 06:26 template.py
-rw-r--r--.  1 root root system_u:object_r:container_file_t:s0 1.1K Oct  4 06:26 util.py

I will open an issue on the Windmill project to see how they want to handle this problem. I wanted to post my findings to get some closure on another SELinux adventure. I don't think this is a Podman-specific issue, and if you agree, feel free to close it.

jdoss reopened this Oct 4, 2023
giuseppe (Member) commented Oct 4, 2023

yes, I think this is not a podman issue, since the command itself sets the label. Thanks for investigating it.

Another option could be to force all these containers to run with the same label, --security-opt=label=level:s0-s0:c4.c254, so even if they use the private label, it is still usable from the other containers. What do you think?

jdoss (Contributor, Author) commented Oct 4, 2023

> yes, I think this is not a podman issue, since the command itself sets the label. Thanks for investigating it.
>
> Another option could be to force all these containers to run with the same label, --security-opt=label=level:s0-s0:c4.c254, so even if they use the private label, it is still usable from the other containers. What do you think?

That is an awesome idea. I can set those easily in the Nomad podman driver config for these workers. I submitted a PR to the Windmill project that makes pip use a tmp directory inside the volume mount. If they don't accept that PR, I will use this. Thanks a ton @giuseppe!

jdoss closed this as completed Oct 4, 2023
jdoss (Contributor, Author) commented Oct 4, 2023

Errrm, that context prevents the container from writing to its own filesystem.

root@0d388559baad:/# ls -lahZ /
total 63M
dr-xr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   80 Oct  4 07:47 .
dr-xr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   80 Oct  4 07:47 ..
drwxrwxrwx.   5   99   99 system_u:object_r:container_file_t:s0              41 Oct  4 07:47 alloc
drwxr-xr-x.   3 root root system_u:object_r:container_file_t:s0-s0:c4.c254   78 Aug  9 16:47 aws
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   20 Sep 30 07:13 bin
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0-s0:c4.c254    6 Sep  3  2022 boot
drwxr-xr-x.   5 root root system_u:object_r:container_file_t:s0-s0:c4.c254  340 Oct  4 07:47 dev
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   42 Oct  4 07:47 etc
-rw-r--r--.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254  16M May 10 16:47 helm-v3.12.0-linux-amd64.tar.gz
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0-s0:c4.c254    6 Sep  3  2022 home
-rw-r--r--.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254  47M Aug 10 06:40 kubectl
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   52 Aug 10 06:39 lib
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0-s0:c4.c254   34 Jun 12 00:00 lib64
drwxr-xr-x.   2 1001  123 system_u:object_r:container_file_t:s0-s0:c4.c254   38 Aug 10 06:40 linux-amd64
drwxrwxrwx.   2   99   99 system_u:object_r:container_file_t:s0               6 Oct  4 07:47 local
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0-s0:c4.c254    6 Jun 12 00:00 media
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0-s0:c4.c254    6 Jun 12 00:00 mnt
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   23 Aug 10 06:40 opt
dr-xr-xr-x. 618 root root system_u:object_r:proc_t:s0                         0 Oct  4 07:47 proc
drwx------.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   20 Sep  6 12:31 root
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   42 Oct  4 07:47 run
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0-s0:c4.c254 4.0K Jun 12 00:00 sbin
drwxrwxrwx.   2   99   99 system_u:object_r:container_file_t:s0             100 Oct  4 07:47 secrets
drwxr-xr-x.   2 root root system_u:object_r:container_file_t:s0-s0:c4.c254    6 Jun 12 00:00 srv
drwxr-xr-x.   5 root root system_u:object_r:container_file_t:s0-s0:c4.c254 4.0K Sep 30 07:10 static_frontend
dr-xr-xr-x.  13 root root system_u:object_r:sysfs_t:s0                        0 Oct  2 23:24 sys
drwxrwxrwt.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   22 Oct  4 07:47 tmp
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   19 Jun 12 00:00 usr
drwxr-xr-x.   1 root root system_u:object_r:container_file_t:s0-s0:c4.c254   41 Jun 12 00:00 var
root@0d388559baad:/# touch foo
touch: cannot touch 'foo': Permission denied

Any ideas, @giuseppe?

jdoss (Contributor, Author) commented Oct 4, 2023

Setting --security-opt=label=level:s0:c4.c254 seems to work just fine. I am not sure if that is the best label or not.
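For anyone who lands here later, applied to the mount from this thread it looks roughly like this (worker image is a placeholder):

# podman run --security-opt label=level:s0:c4.c254 -v /storage/jobs/worker-cache:/tmp/windmill/cache:z <worker-image>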

github-actions bot added the locked - please file new issue/PR label Jan 3, 2024
github-actions bot locked as resolved and limited conversation to collaborators Jan 3, 2024