"Too many open files" even though the system limits are not reached #10367

Open
holahmeds opened this issue Jul 15, 2022 · 2 comments

@holahmeds

We are seeing this error occasionally (roughly once every few months). We get the following exception:

Caused by: org.rocksdb.RocksDBException: While open a file for random read: /rocks/database/1671607.sst: Too many open files
	at org.rocksdb.RocksDB.put(Native Method)
	at org.rocksdb.RocksDB.put(RocksDB.java:716)

All our investigations show that the number of file descriptors we have open is far below the limit. When we catch this exception we run lsof and save its output, which shows roughly 160K open file handles. The system limit is set to ~655K.

$ ulimit -Hn
655350
$ ulimit -Sn
655350
$ cat /proc/9221/limits | grep "open files"
Max open files            655350               655350               files

We also ran some test programs (with lower limits set) and verified that the limits are being honored; a sketch of the kind of test is below. I don't see how RocksDB could be getting anywhere close to the 655K limit, though.
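The tests were along these lines (a minimal sketch, not our actual test program): keep opening descriptors until the OS refuses, then check that the failure happens near the configured limit.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class FdLimitTest {
    public static void main(String[] args) throws IOException {
        List<FileInputStream> held = new ArrayList<>();
        try {
            // Each stream holds one file descriptor open.
            while (true) {
                held.add(new FileInputStream("/dev/null"));
            }
        } catch (IOException e) {
            // With e.g. `ulimit -n 1024` this fires a little below 1024,
            // since the JVM itself already holds some descriptors.
            System.out.println("Failed after " + held.size() + " fds: " + e.getMessage());
        } finally {
            for (FileInputStream in : held) {
                in.close();
            }
        }
    }
}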

Our database currently has 570 sst files. max_open_files is set to -1 and we are using universal compaction. My understanding is that this means the number of sst files can double during compaction, which brings us to 1140 files.
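For completeness, the relevant options are wired up roughly like this (a minimal sketch; apart from setMaxOpenFiles(-1) and universal compaction, the configuration shown is illustrative, not our real setup). As I understand it, -1 tells RocksDB to keep every table file it opens cached, so each live SST file can hold a descriptor.

import org.rocksdb.CompactionStyle;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class OpenDb {
    static { RocksDB.loadLibrary(); }

    public static void main(String[] args) throws RocksDBException {
        try (Options options = new Options()
                .setCreateIfMissing(true)
                .setCompactionStyle(CompactionStyle.UNIVERSAL) // universal compaction
                .setMaxOpenFiles(-1);                          // -1 = no limit on cached table files
             RocksDB db = RocksDB.open(options, "/rocks/database")) {
            db.put("key".getBytes(), "value".getBytes());
        }
    }
}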

We are using rocksdb version 6.13.3.

@ajkr added the question and up-for-grabs labels on Jul 17, 2022

linas commented Aug 4, 2022

I had a problem similar to this; I fixed it by setting max_open_files to a fixed number -- I recommend a little over double the number of your *.sst files. My code does this:

   #include <sys/resource.h>   // getrlimit(), RLIMIT_NOFILE

   // Query the process's soft limit on open file descriptors.
   struct rlimit maxfh;
   getrlimit(RLIMIT_NOFILE, &maxfh);
   size_t max_of = maxfh.rlim_cur;

   // Leave headroom for the ~230 descriptors the rest of the system uses.
   if (256 < max_of) max_of -= 230;
   else
      throw IOException(TRACE_INFO,
         "Open file limit too low. Set ulimit -n 1024 or larger!");

   options.max_open_files = max_of;

I subtract 230 because other parts of my system use 230 file descriptors. The default ulimit -n is 1K, so a max-open-files setting of 655K sounds crazy to me -- you're probably hurting performance with that kind of limit.

@theolivenbaum
Contributor

Seeing a similar issue with some of our users. The strange thing is that they're getting "Too many open files" on Windows (which shouldn't be affected by this limit).
