Description
When deploying K8S with a large set of services, especially Rook/Ceph and KubeVirt on the same cluster, containers that make heavy use of `inotify` start erroring out. Nowadays, the Linux kernel derives the default `max_user_watches` value from the amount of RAM, clamped to the [8192, 1048576] range.
See: torvalds/linux@9289012
It would be nice to document this behaviour and suggest setting a bigger value for large K8S deployments (with an Ignition Butane example, such as the sketch below).
See: https://www.suse.com/support/kb/doc/?id=000020048 (SUSE KB on inotify watch limits)
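For the docs, a minimal Butane sketch along these lines could work (assuming the `fcos` variant and Butane spec 1.4.0; the sysctl fragment path and the values below are illustrative, not an official recommendation):

```yaml
variant: fcos
version: 1.4.0
storage:
  files:
    - path: /etc/sysctl.d/90-inotify.conf
      mode: 0644
      contents:
        inline: |
          # Raise inotify limits for nodes running many watch-heavy
          # containers (Rook/Ceph, KubeVirt, etc.). 1048576 matches the
          # kernel's current upper clamp; go higher if workloads need it.
          fs.inotify.max_user_watches = 1048576
          fs.inotify.max_user_instances = 8192
```

After provisioning, `sysctl fs.inotify.max_user_watches` on the node should report the raised value.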