
[docs] Document the inotify/max_user_watches requirements #1428

Open
ader1990 opened this issue Apr 16, 2024 · 1 comment

Description

When deploying K8s with a large set of services, especially Rook/Ceph and KubeVirt on the same cluster, containers that make heavy use of inotify start erroring out.

Nowadays, the Linux kernel sizes the max_user_watches default according to available RAM, clamped to the [8192, 1048576] range.
See: torvalds/linux@9289012

It would be nice to document this behaviour and suggest setting a larger value for large K8s deployments (with an Ignition/Butane example).

The current values can be inspected with:

```shell
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches
```

See: https://www.suse.com/support/kb/doc/?id=000020048
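A Butane snippet along these lines could raise the limits via a sysctl.d drop-in (the file name and the values below are illustrative, not tested recommendations — the right numbers depend on cluster size):

```yaml
variant: flatcar
version: 1.0.0
storage:
  files:
    # Hypothetical drop-in; values chosen for illustration only.
    - path: /etc/sysctl.d/90-inotify.conf
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_instances = 8192
          fs.inotify.max_user_watches = 524288
```

After provisioning, the settings would be applied at boot by systemd-sysctl and can be verified with the `cat` commands above.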

@ader1990 ader1990 added the kind/bug Something isn't working label Apr 16, 2024
@ader1990 ader1990 added kind/docs and removed kind/bug Something isn't working labels Apr 16, 2024
@ader1990 ader1990 self-assigned this Apr 16, 2024
@jepio (Member) commented Apr 16, 2024

@ader1990 how about we document this and ship a default sysctl setting that is good enough for a decently sized cluster?
