Currently backup pods run as a particular uid (65532, as shown below). If a volume being mounted has no read/access permission for other users, then the backup will fail to back up this volume.
bash-5.1$ id
uid=65532 gid=0(root) groups=0(root)
bash-5.1$ ls -alh /data/
total 28K
drwxr-xr-x 3 root root 19 Aug 22 21:44 .
drwxr-xr-x 1 root root 54 Aug 22 21:44 ..
drwxrws--- 12 10000 10001 30.0K Aug 21 21:49 nginx
bash-5.1$ ls -alh /data/nginx/
ls: can't open '/data/nginx/': Permission denied
The backup throws an error as it has no permission to access this directory; unfortunately (or fortunately?) the backup still progresses through any other backups it needs to do, so the backup still "succeeds".
1.7243634935042348e+09 ERROR k8up.restic.restic.backup.progress /data/nginx during scan {"error": "error occurred during backup"}
github.com/k8up-io/k8up/v2/restic/logging.(*BackupOutputParser).out
/home/runner/work/k8up/k8up/restic/logging/logging.go:156
github.com/k8up-io/k8up/v2/restic/logging.writer.Write
/home/runner/work/k8up/k8up/restic/logging/logging.go:103
io.copyBuffer
/opt/hostedtoolcache/go/1.19.2/x64/src/io/io.go:429
io.Copy
/opt/hostedtoolcache/go/1.19.2/x64/src/io/io.go:386
os/exec.(*Cmd).writerDescriptor.func1
/opt/hostedtoolcache/go/1.19.2/x64/src/os/exec/exec.go:407
os/exec.(*Cmd).Start.func1
/opt/hostedtoolcache/go/1.19.2/x64/src/os/exec/exec.go:544
1.724363493509051e+09 INFO k8up.restic.restic.backup.progress progress of backup {"percentage": "0.00%"}
1.7243634938520162e+09 ERROR k8up.restic.restic.backup.progress /data/nginx during archival {"error": "error occurred during backup"}
github.com/k8up-io/k8up/v2/restic/logging.(*BackupOutputParser).out
/home/runner/work/k8up/k8up/restic/logging/logging.go:156
github.com/k8up-io/k8up/v2/restic/logging.writer.Write
/home/runner/work/k8up/k8up/restic/logging/logging.go:103
io.copyBuffer
/opt/hostedtoolcache/go/1.19.2/x64/src/io/io.go:429
io.Copy
/opt/hostedtoolcache/go/1.19.2/x64/src/io/io.go:386
os/exec.(*Cmd).writerDescriptor.func1
/opt/hostedtoolcache/go/1.19.2/x64/src/os/exec/exec.go:407
os/exec.(*Cmd).Start.func1
/opt/hostedtoolcache/go/1.19.2/x64/src/os/exec/exec.go:544
With rootless workloads enabled in an environment, it would be possible to configure the backups to use the same runAsUser and fsGroup for the backup pod by passing through a PodConfig (see https://docs.k8up.io/k8up/2.11/how-tos/schedules.html#_customize_pod_spec). This feature appears to only be supported in k8up v2 though, so only clusters using that version would be able to benefit from this.
Running a one-off backup in a namespace against a volume with the above permissions, with a podSecurityContext defined on the Backup like the sketch below: when the backup pod runs, we can see that it runs as that user and now has access to the volume.
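A minimal sketch of what that one-off Backup could look like, assuming the podSecurityContext field of the k8up v2 Backup spec; the resource name and namespace are made up, the uid/gid values come from the volume listing above, and the backend configuration is omitted as it is not relevant to the permission issue:

apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: backup-nginx          # hypothetical name
  namespace: example          # hypothetical namespace
spec:
  # Run the backup pod as the owner of the volume shown above
  # (drwxrws--- 10000:10001), so restic can traverse the directory.
  podSecurityContext:
    runAsUser: 10000
    fsGroup: 10001
  # backend: ...              # repository configuration omitted for brevity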
It may be worth extending the backup templating to generate a PodConfig along the lines of the sketch below, and updating the schedule to include it as the global podConfig for the schedule, or just for the backups. I think it is probably only relevant to backup in this case.
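A hedged sketch of what such a generated PodConfig could look like, together with how a Schedule might reference it. This assumes the PodConfig CRD and the podConfigRef field described in the linked k8up v2 docs; all names, values and the exact placement of the reference are illustrative only:

apiVersion: k8up.io/v1
kind: PodConfig
metadata:
  name: backup-nginx-podconfig   # hypothetical, e.g. derived from the PVC by the templating
  namespace: example
spec:
  template:
    spec:
      securityContext:
        # Match the owner/group of the volume that needs to be backed up.
        runAsUser: 10000
        fsGroup: 10001
---
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: schedule-nginx
  namespace: example
spec:
  backup:
    schedule: '0 1 * * *'
    # Assumption: the PodConfig is referenced per job here; this issue proposes
    # making it the global podConfig for the schedule, or just for the backups.
    podConfigRef:
      name: backup-nginx-podconfig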
This could have other impacts, so it will need more verification before being rolled out, considering that #339 would allow more volumes to be created, and they may not have the same permissions as a volume that is created/modified by the nginx init container when rootless workloads are enabled.