We are seeing this error sometimes (once in a few months). We get the following exception:
Caused by: org.rocksdb.RocksDBException: While open a file for random read: /rocks/database/1671607.sst: Too many open files
at org.rocksdb.RocksDB.put(Native Method)
at org.rocksdb.RocksDB.put(RocksDB.java:716)
All our investigations show that the number of open file descriptors is much lower than the limit. Upon catching this exception we call lsof and save the output, which shows ~160K open file handles; the system limit is set to ~655K.
We also ran some test programs (with lower limits set) and verified that the limits are honored. I don't see how RocksDB could be getting anywhere close to the 655K limit, though.
Our database currently has 570 sst files. max_open_files is set to -1 and we are using universal compaction. My understanding is that this means the number of sst files can double during compaction, which brings us to 1140 files.
We are using rocksdb version 6.13.3.
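For context, our setup boils down to roughly this (a simplified sketch, not our actual code; the path and the literal key/value put are illustrative):

import org.rocksdb.*;

public class RocksConfigSketch {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options()
                 .setCreateIfMissing(true)
                 .setCompactionStyle(CompactionStyle.UNIVERSAL)
                 .setMaxOpenFiles(-1);       // -1: RocksDB never evicts open .sst handles
             RocksDB db = RocksDB.open(options, "/rocks/database")) {
            db.put("key".getBytes(), "value".getBytes());
        }
    }
}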
I had a similar problem and fixed it by setting max_open_files to a fixed number -- I recommend a little over double your *.sst file count. My code does this:
#include <sys/resource.h>

struct rlimit maxfh;
getrlimit(RLIMIT_NOFILE, &maxfh);
size_t max_of = maxfh.rlim_cur;   // current soft limit on open files
if (256 < max_of)
    max_of -= 230;                // leave headroom for the rest of the process
else
    throw IOException(TRACE_INFO,
        "Open file limit too low. Set ulimit -n 1024 or larger!");
options.max_open_files = max_of;
I subtract 230 because other parts of my system use 230 file descriptors. The ulimit -n default is typically 1024, so a max-open-files limit of 655K sounds crazy to me -- you're probably hurting performance with that kind of limit.
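In the Java binding you're using, the same cap would look roughly like this (1200 is just an example value: a little over double your 570 .sst files):

// Hypothetical cap: a bit more than 2 x 570 .sst files.
options.setMaxOpenFiles(1200);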
We're seeing a similar issue with some of our users. The strange thing is that they're getting "Too many open files" on Windows, which shouldn't be affected by this ulimit.