Non root user in docker.io/eclipse-mosquitto container cannot write to any directory under podman 2.0.2 #6989

Closed
Lalufu opened this issue Jul 15, 2020 · 16 comments · Fixed by #7005
Labels: kind/bug, locked - please file new issue/PR

Comments

Lalufu commented Jul 15, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

The docker.io/eclipse-mosquitto container contains a binary that drops privileges to UID/GID 1883. When run under podman 2.0.2, that user is not able to write to any file or directory, regardless of permissions. When run under podman 1.8.2, things work.

Steps to reproduce the issue:

podman 2.0.2:

$ sudo podman run -it --rm docker.io/eclipse-mosquitto:1.6 /bin/sh
/ # cat > /tmp/mosquitto.conf <<EOF
> log_dest file /tmp/mosquitto.log
> EOF                          
/ # ls -ld /tmp                
drwxrwxrwt    1 root     root            28 Jul 15 19:02 /tmp
/ # /usr/sbin/mosquitto -c /tmp/mosquitto.conf
1594839785: Error: Unable to open log file /tmp/mosquitto.log for writing.

podman 1.8.2:

$ sudo podman run -it --rm docker.io/eclipse-mosquitto:1.6 /bin/sh
/ # cat > /tmp/mosquitto.conf <<EOF
> log_dest file /tmp/mosquitto.log
> EOF
/ # /usr/sbin/mosquitto -c /tmp/mosquitto.conf
^C
/ # ls -ln /tmp
total 8
-rw-r--r--    1 0        0               33 Jul 15 19:05 mosquitto.conf
-rw-------    1 1883     1883           253 Jul 15 19:05 mosquitto.log

Describe the results you received:
Under 2.0.2, writing files as the 1883 user inside the container fails even in locations where it should succeed.

Describe the results you expected:
Same as under 1.8.2, the write should succeed.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

1.8.2:

Version:            1.8.2
RemoteAPI Version:  1
Go Version:         go1.14
OS/Arch:            linux/amd64

2.0.2:

Version:      2.0.2
API Version:  1
Go Version:   go1.14.3
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

debug:                         
  compiler: gc                 
  git commit: ""               
  go version: go1.14           
  podman version: 1.8.2        
host:                          
  BuildahVersion: 1.14.3       
  CgroupVersion: v2            
  Conmon:                      
    package: conmon-2.0.18-1.fc32.x86_64
    path: /usr/bin/conmon      
    version: 'conmon version 2.0.18, commit: 6e8799f576f11f902cd8a8d8b45b2b2caf636a85'
  Distribution:                
    distribution: fedora       
    version: "32"              
  MemFree: 10011795456         
  MemTotal: 16705478656        
  OCIRuntime:                  
    name: crun                 
    package: crun-0.14-2.fc32.x86_64
    path: /usr/bin/crun        
    version: |-                
      crun version 0.14        
      commit: ebc56fc9bcce4b3208bb0079636c80545122bf58
      spec: 1.0.0              
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 1073737728         
  SwapTotal: 1073737728        
  arch: amd64                  
  cpus: 4                      
  eventlogger: journald        
  hostname: faith.camperquake.de
  kernel: 5.7.7-200.fc32.x86_64
  os: linux                    
  rootless: false              
  uptime: 1h 58m 31.9s (Approximately 0.04 days)
registries:                    
  search:                      
  - registry.fedoraproject.org 
  - registry.access.redhat.com 
  - registry.centos.org        
  - docker.io                  
store:                         
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:              
    number: 0                  
  GraphDriverName: overlay     
  GraphOptions:                
    overlay.mountopt: nodev,metacopy=on
  GraphRoot: /var/lib/containers/storage
  GraphStatus:                 
    Backing Filesystem: btrfs  
    Native Overlay Diff: "false"
    Supports d_type: "true"    
    Using metacopy: "true"     
  ImageStore:                  
    number: 1                  
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

2.0.2:

host:                          
  arch: amd64                  
  buildahVersion: 1.15.0       
  cgroupVersion: v2            
  conmon:                      
    package: conmon-2.0.18-1.fc32.x86_64
    path: /usr/bin/conmon      
    version: 'conmon version 2.0.18, commit: 6e8799f576f11f902cd8a8d8b45b2b2caf636a85'
  cpus: 4                      
  distribution:                
    distribution: fedora       
    version: "32"              
  eventLogger: file            
  hostname: faith.camperquake.de
  idMappings:                  
    gidmap: null               
    uidmap: null               
  kernel: 5.7.7-200.fc32.x86_64
  linkmode: dynamic            
  memFree: 9950269440          
  memTotal: 16705478656        
  ociRuntime:                  
    name: crun                 
    package: crun-0.14-2.fc32.x86_64
    path: /usr/bin/crun        
    version: |-                
      crun version 0.14        
      commit: ebc56fc9bcce4b3208bb0079636c80545122bf58
      spec: 1.0.0              
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux                    
  remoteSocket:                
    path: /run/podman/podman.sock
  rootless: false              
  slirp4netns:                 
    executable: ""             
    package: ""                
    version: ""                
  swapFree: 1073737728         
  swapTotal: 1073737728        
  uptime: 2h 1m 11.39s (Approximately 0.08 days)
registries:                    
  search:                      
  - registry.fedoraproject.org 
  - registry.access.redhat.com 
  - registry.centos.org        
  - docker.io                  
store:                         
  configFile: /etc/containers/storage.conf
  containerStore:              
    number: 0                  
    paused: 0                  
    running: 0                 
    stopped: 0                 
  graphDriverName: overlay     
  graphOptions:                
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:                 
    Backing Filesystem: btrfs  
    Native Overlay Diff: "false"
    Supports d_type: "true"    
    Using metacopy: "true"     
  imageStore:                  
    number: 1                  
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:                       
  APIVersion: 1                
  Built: 0                     
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""                
  GoVersion: go1.14.3          
  OsArch: linux/amd64          
  Version: 2.0.2               

Package info (e.g. output of rpm -q podman or apt list podman):

1.8.2:

podman-1.8.2-2.fc32.x86_64

2.0.2:

podman-2.0.2-1.fc32.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

openshift-ci-robot added the kind/bug label Jul 15, 2020
rhatdan commented Jul 15, 2020

Your reproducer worked fine for me.

rhatdan commented Jul 15, 2020

#  sudo podman run -it --rm docker.io/eclipse-mosquitto:1.6 /bin/sh
/ # cat > /tmp/mosquitto.conf <<EOF
> log_dest file /tmp/mosquitto.log
> EOF
/ # id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
/ # /usr/sbin/mosquitto -c /tmp/mosquitto.conf
/ # ls /tmp/
mosquitto.conf  mosquitto.log
/ # ls /tmp/ -l
total 8
-rw-r--r--    1 root     root            33 Jul 15 19:57 mosquitto.conf
-rw-------    1 mosquitt mosquitt       253 Jul 15 19:57 mosquitto.log

Lalufu commented Jul 15, 2020

Hm. It (reproducibly) does not work for me on two different machines. I'm at a bit of a loss on how to debug this.

rhatdan commented Jul 15, 2020

Could you try it privileged? Did you try this as non-root?

rhatdan commented Jul 15, 2020

One curious thing from your example above is that

-rw-r--r-- 1 0 0 33 Jul 15 19:05 mosquitto.conf
-rw------- 1 1883 1883 253 Jul 15 19:05 mosquitto.log

Shouldn't mosquitto.conf have some size?

Lalufu commented Jul 16, 2020

I did run all of this as root (using sudo from my normal user account). I could try this from a "real" root shell if you think that will make any difference?

The file size is 33, those 0's are the uid and gid.

Lalufu commented Jul 16, 2020

OK, using podman 2.0.2, running under a shell spawned by "sudo su -" does work. Running the same podman command using "sudo" from my user shell directly does not work.
The only thing that comes to mind immediately is that my normal user shell has a umask of 0077, while the root shell has a umask of 0022.
I'm still puzzled why this matters for 2.0.2 but not for 1.8.2.

rhatdan commented Jul 16, 2020

If you change your umask does it work?

Lalufu commented Jul 16, 2020

It does.

$ umask
0077
$ umask 0022
$ umask
0022
$ sudo podman run -it --rm docker.io/eclipse-mosquitto:1.6 /bin/sh
/ # cat > /tmp/mosquitto.conf <<EOF
> log_dest file /tmp/mosquitto.log
> EOF
/ # /usr/sbin/mosquitto -c /tmp/mosquitto.conf
^C/ # ls -l /tmp
total 8
-rw-r--r--    1 root     root            33 Jul 16 18:59 mosquitto.conf
-rw-------    1 mosquitt mosquitt       253 Jul 16 18:59 mosquitto.log
/ #

rhatdan commented Jul 17, 2020

$ umask 0077
$ sudo umask
0077
$ sudo podman run -ti fedora umask
0022

Strange, since it seems the container runtime switches the umask inside the container, but that probably happens after the image is set up.
It seems the umask is affecting the way the container is being configured.

rhatdan commented Jul 17, 2020

I wonder if podman should force the umask to 0022 when it starts up, to avoid this kind of issue.

rhatdan commented Jul 17, 2020

@giuseppe @mheon @baude WDYT?

Lalufu commented Jul 17, 2020

I have to admit that finding out that it's the umask that triggers the behaviour has done little to clear up my understanding of the exact mechanism of failure. The writes in the test case don't go to permanent storage; they disappear when the container is stopped. They go to some sort of temporary overlay, I assume?
Why can root write to that overlay despite the umask, but non-root users cannot? 0022 still prevents non-owners from writing, if it's a filesystem permission on the host somewhere and the file is owned by root.
Why did this work under 1.8.2?

giuseppe commented:

> Strange, since it seems the container runtime switches the umask inside the container, but that probably happens after the image is set up.
> It seems the umask is affecting the way the container is being configured.

Yes, the OCI runtime forces the umask to 0022 (if not configured differently), but podman doesn't.

I remember adding the umask check to podman at some point; it probably got lost in the migration to 2.0.

giuseppe commented:

Yes, it is a regression introduced by 241326a, which drops the call to setUMask.

giuseppe commented:

PR here: #7005

giuseppe added a commit to giuseppe/libpod that referenced this issue Jul 17, 2020
the code got lost in the migration to podman 2.0, reintroduce it.

Closes: containers#6989

Signed-off-by: Giuseppe Scrivano <[email protected]>
mheon pushed a commit to mheon/libpod that referenced this issue Jul 22, 2020
the code got lost in the migration to podman 2.0, reintroduce it.

Closes: containers#6989

Signed-off-by: Giuseppe Scrivano <[email protected]>

<MH: Fixed build>

Signed-off-by: Matthew Heon <[email protected]>