
K8s StatefulSet example should include SYS_RESOURCE capability #1947

Closed
onitake opened this issue Sep 17, 2019 · 2 comments

Comments


onitake commented Sep 17, 2019

As reported in several places:

It's currently not possible to run m3dbnode in a Kubernetes container without raising the maximum number of file descriptors. Setting kernel parameters is not enough: they only raise the system-wide upper limit, but do not cause the container runtime or Kubernetes to lift the per-process limit inside the container. In our case, the default limit appears to be 65536 file descriptors, and it's unclear where this value comes from.

However, setting rlimits from inside m3dbnode does not work unless the container has the SYS_RESOURCE capability. Enabling this capability in turn requires changing the security context, which requires a suitable pod security policy. None of this is ideal, but at least it allows the container to raise its own limits. Otherwise, it prints an error message at startup and crashes later on, because 65536 file descriptors don't seem to be enough for even a simple cluster.
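As a sketch of the rlimit mechanics involved (Python is used here purely for illustration; m3dbnode itself is Go, but the underlying setrlimit(2) semantics are the same): a process may raise its soft limit up to the hard limit without privileges, but raising the hard limit itself requires CAP_SYS_RESOURCE and fails with EPERM in an unprivileged container.

```python
import resource

# Query the current file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"before: soft={soft} hard={hard}")

# Unprivileged processes may bump the soft limit up to the hard limit.
# Setting the hard limit *above* its current value would additionally
# require CAP_SYS_RESOURCE and raise PermissionError without it.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"after:  soft={soft} hard={hard}")
```

This is why the capability has to be granted to the container: m3dbnode's startup rlimit adjustment exceeds the hard limit imposed by the runtime.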

I propose that https://github.com/m3db/m3/blob/master/kube/m3dbnode-statefulset.yaml be changed to include the following security context in the container template:

        securityContext:
          capabilities:
            add:
            - SYS_RESOURCE

Note that this must go inside the m3db container specification, not in the pod-level spec section.
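For clarity, a minimal sketch of where this lands in the StatefulSet manifest (the container name and image reference are illustrative, not taken from the actual file):

```yaml
apiVersion: apps/v1
kind: StatefulSet
spec:
  template:
    spec:
      containers:
      - name: m3db                          # container-level, not pod-level
        image: quay.io/m3db/m3dbnode:latest # illustrative image reference
        securityContext:
          capabilities:
            add:
            - SYS_RESOURCE
```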

@martin-mao
Collaborator

@schallert ^

@schallert
Collaborator

Hey @onitake, sorry for losing track of this but we finally fixed this in #2174.
