
Increase open file limit in the VM / inside containers #713

Open
jandubois opened this issue Oct 1, 2021 · 10 comments
Labels
area/config kind/enhancement New feature or request

Comments

@jandubois (Member)

Report on user-slack: https://rancher-users.slack.com/archives/C0200L1N1MM/p1633033996169800

pod ulimits are too low for my use case. Specifically open files. Any recommended way to increase it?

The default setting for ulimit -n is 1024, the same as in most distros. Should we increase it?
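For anyone comparing values on their own machine, the current soft and hard limits can be checked from any shell; these commands are standard Linux/POSIX, not specific to Rancher Desktop:

```shell
# Soft limit: what processes actually get by default
ulimit -Sn
# Hard limit: the ceiling an unprivileged process may raise the soft limit to
ulimit -Hn
# On Linux, the same information per process:
grep 'open files' /proc/self/limits
```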

@jandubois jandubois added the kind/enhancement New feature or request label Oct 1, 2021

eldada commented Oct 28, 2021

I'm unable to run a container with nerdctl due to a "Too many open files" error in my app.
This limit should be configurable in the VM settings so that larger applications can run.

@mattfarina mattfarina added this to the v1.0.0 milestone Oct 28, 2021
@jandubois (Member Author)

This bug is really about the limit inside containers scheduled by Kubernetes.

For containers run directly via nerdctl you can specify the limit on the command line:

$ nerdctl run --rm alpine sh -c "ulimit -n"
1024
$ nerdctl run --ulimit nofile=4096:4096 --rm alpine sh -c "ulimit -n"
4096
$ nerdctl run --ulimit nofile=8192:8192 --rm alpine sh -c "ulimit -n"
8192
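The soft:hard pair in --ulimit nofile=soft:hard mirrors the shell's own two-tier limits, and the mechanics can be seen with plain sh, no container runtime required (512 is just an arbitrary example value):

```shell
# An unprivileged process may lower its soft limit freely, and raise
# it again later, but never above the hard limit.
sh -c 'ulimit -Sn 512; ulimit -Sn'   # new soft limit: 512
# The hard limit is the ceiling; only root may raise it:
sh -c 'ulimit -Hn'
```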

@stephenpope

Trying rancher-desktop and hit these errors with out-of-the-box Helm charts for these applications:

Elasticsearch

1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch.log

Neo4J

WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.

RabbitMQ

rabbitmq 17:46:43.55 INFO  ==> Initializing RabbitMQ...
/opt/bitnami/scripts/librabbitmq.sh: line 750: ulimit: open files: cannot modify limit: Operation not permitted

@gaktive gaktive modified the milestones: v1.0.0-beta, v1.0.0 Jan 7, 2022

yuchanns commented Jan 10, 2022

Same issue here: after installing TiDB via Helm with helm install tidb-cluster pingcap/tidb-cluster --version=v1.2.6 --namespace=tidb-cluster, I got a CrashLoopBackOff for tikv:

[2022/01/10 15:23:41.864 +00:00] [FATAL] [server.rs:1102] ["the maximum number of open file descriptors is too small, got 1024, expect greater or equal to 82920"]

What should I do?

Apple M1 Max

@jandubois (Member Author)

Since nobody indicated which platform they are using, I'm just assuming macOS (or Linux) now. If you are on Windows, this will not work:

Create a ~/Library/Application Support/rancher-desktop/lima/_config/override.yaml file with a provisioning script:

provision:
- mode: system
  script: |
    #!/bin/sh
    cat <<'EOF' > /etc/security/limits.d/rancher-desktop.conf
    * soft     nofile         82920
    * hard     nofile         82920
    EOF

Stop and restart Rancher Desktop, and you should have updated limits in your containers. I've verified this with RabbitMQ; after the restart the container started up automatically.

I've also checked the dockerd configuration, which seems to have a larger nofile limit by default.
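To confirm the raised limit actually reaches Kubernetes-scheduled containers (not just ones started with nerdctl), a throwaway pod can print it; the pod name ulimit-check here is just an illustrative choice, not anything Rancher Desktop creates:

```yaml
# Apply with: kubectl apply -f ulimit-check.yaml
# Then read the result with: kubectl logs ulimit-check
apiVersion: v1
kind: Pod
metadata:
  name: ulimit-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: alpine
    command: ["sh", "-c", "ulimit -n"]
```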

@gaktive gaktive modified the milestones: v1.0.0, v1.1.0 Jan 15, 2022
@sveriger

Thanks @jandubois, this workaround works for Elasticsearch too, @stephenpope

@jandubois (Member Author)

Thanks @jandubois, this workaround works for Elasticsearch too, @stephenpope

I thought for Elastic you also had to increase the vm.max_map_count setting. So for anybody else finding this issue, add

sysctl -w vm.max_map_count=262144

to the provisioning script if you need to update the count for Elastic.
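Putting both pieces together, the complete override.yaml from the earlier comment would then read (same file location and values as given above):

```yaml
provision:
- mode: system
  script: |
    #!/bin/sh
    # Raise the open-file limits for everything in the VM
    cat <<'EOF' > /etc/security/limits.d/rancher-desktop.conf
    * soft     nofile         82920
    * hard     nofile         82920
    EOF
    # mmap count needed by Elasticsearch
    sysctl -w vm.max_map_count=262144
```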

@stephenpope

Tested with RD 1.0.0 and this gets everything I needed running (Elastic/RabbitMQ/Neo4J/SQL Server) 🥳 ... I had to increase the limit to 1000121 for HAProxy, if anyone is keeping track of these values :)

@xavierivadulla

Hi all,

Same issue but on Windows... Any help would be really appreciated.

Best regards

@gaktive gaktive added this to the Later milestone Aug 23, 2022

brunoml commented Sep 9, 2022

I got it working on Windows by creating a file at %AppData%\rancher-desktop\provisioning\map_count.start with this content:

#!/bin/sh

sysctl -w vm.max_map_count=262144

Then just close and start Rancher Desktop again.

10 participants