
[enhancement](file-cache) limit the file cache handle num and init the file cache concurrently #22919

Merged · 3 commits into apache:master · Aug 17, 2023

Conversation

@morningman (Contributor) commented Aug 13, 2023:

Proposed changes

  1. The real value of the BE config `file_cache_max_file_reader_cache_size` will be capped at 1/3 of the process's max open file number.
  2. Use a thread pool to create and init the file cache concurrently. This solves the issue that, when there are lots of files in the file cache dir, BE startup is very slow because it traverses all file cache dirs sequentially. (A sketch of this change follows below.)
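As a rough illustration of change 2, here is a minimal sketch that initializes each cache directory concurrently. `load_one_cache_dir()` and `init_file_caches_concurrently()` are hypothetical names, and plain `std::thread` stands in for the thread pool the actual patch uses:

```cpp
// Sketch only, not the Doris code: init each cache dir on its own task.
// load_one_cache_dir() is a hypothetical stand-in for the per-directory
// traversal work that was previously done sequentially.
#include <string>
#include <thread>
#include <vector>

void load_one_cache_dir(const std::string& dir) {
    // ... traverse `dir` and rebuild its cache state ...
}

void init_file_caches_concurrently(const std::vector<std::string>& cache_dirs) {
    std::vector<std::thread> workers;
    workers.reserve(cache_dirs.size());
    for (const auto& dir : cache_dirs) {
        // One task per dir: startup time is bounded by the slowest
        // directory instead of the sum over all directories.
        workers.emplace_back(load_one_cache_dir, dir);
    }
    for (auto& w : workers) {
        w.join(); // BE startup continues once every dir is loaded
    }
}
```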

Further comments

If this is a relatively large or complex change, kick off the discussion at [email protected] by explaining why you chose the solution you did, what alternatives you considered, etc.

@morningman changed the title from "[enhancement](file-cache) limit the file cache'" to "[enhancement](file-cache) limit the file cache handle num and init the file cache concurrently" on Aug 13, 2023
@github-actions (Contributor) commented:
clang-tidy review says "All clean, LGTM! 👍"

@@ -205,5 +207,21 @@ size_t IFileCache::file_reader_cache_size() {
return s_file_name_to_reader.size();
}

void IFileCache::init() {
struct rlimit limit;
if (getrlimit(RLIMIT_NOFILE, &limit) != 0) {
A reviewer (Member) commented on this line:
So if getting the rlimit fails, config::file_cache_max_file_reader_cache_size will not be used to set _max_file_reader_cache_size.

@morningman (Contributor, Author) replied:

If it fails, the LOG(FATAL) will crash the BE.
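For context, here is a plausible shape of the rest of IFileCache::init(), inferred from the visible hunk and this exchange. Everything beyond the getrlimit() check is an assumption, not the actual Doris code:

```cpp
// Assumed continuation of the hunk above. Only the getrlimit() call, the
// FATAL-on-failure behavior, and the 1/3 cap are stated in the PR; the
// rest of the body is an illustrative guess.
#include <sys/resource.h>

#include <algorithm>
#include <cerrno>
#include <cstdint>

void IFileCache::init() {
    struct rlimit limit;
    if (getrlimit(RLIMIT_NOFILE, &limit) != 0) {
        // Per the author's reply: failure here crashes the BE, so the
        // config value is never applied on this path.
        LOG(FATAL) << "getrlimit failed, errno: " << errno;
    }
    // Cap the configured reader cache size at 1/3 of the process's
    // max open file number (the "real value" from the PR description).
    _max_file_reader_cache_size = std::min<uint64_t>(
            config::file_cache_max_file_reader_cache_size,
            static_cast<uint64_t>(limit.rlim_cur) / 3);
}
```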

@github-actions (Contributor) commented:
clang-tidy review says "All clean, LGTM! 👍"

@morningman (Contributor, Author) commented:
run buildall

@hello-stephen (Contributor) commented:
(From new machine) TeamCity pipeline, clickbench performance test result:
the sum of best hot time: 45.67 seconds
stream load tsv: 515 seconds loaded 74807831229 Bytes, about 138 MB/s
stream load json: 20 seconds loaded 2358488459 Bytes, about 112 MB/s
stream load orc: 65 seconds loaded 1101869774 Bytes, about 16 MB/s
stream load parquet: 31 seconds loaded 861443392 Bytes, about 26 MB/s
insert into select: 29.3 seconds inserted 10000000 Rows, about 341K ops/s
storage size: 17162288766 Bytes

@morningman (Contributor, Author) commented:
run external

1 similar comment
@zhangguoqiang666 (Contributor) commented:
run external

@zhangguoqiang666 (Contributor) commented:
run buildall

@zhangguoqiang666 (Contributor) commented:
run external

1 similar comment
@zhangguoqiang666 (Contributor) commented:
run external

@hello-stephen (Contributor) commented:
(From new machine) TeamCity pipeline, clickbench performance test result:
the sum of best hot time: 44.83 seconds
stream load tsv: 517 seconds loaded 74807831229 Bytes, about 137 MB/s
stream load json: 20 seconds loaded 2358488459 Bytes, about 112 MB/s
stream load orc: 65 seconds loaded 1101869774 Bytes, about 16 MB/s
stream load parquet: 32 seconds loaded 861443392 Bytes, about 25 MB/s
insert into select: 28.9 seconds inserted 10000000 Rows, about 346K ops/s
storage size: 17162138449 Bytes

@morningman (Contributor, Author) commented:
run buildall

@github-actions (Contributor) commented:
clang-tidy review says "All clean, LGTM! 👍"

@hello-stephen (Contributor) commented:
(From new machine) TeamCity pipeline, clickbench performance test result:
the sum of best hot time: 48.37 seconds
stream load tsv: 536 seconds loaded 74807831229 Bytes, about 133 MB/s
stream load json: 20 seconds loaded 2358488459 Bytes, about 112 MB/s
stream load orc: 65 seconds loaded 1101869774 Bytes, about 16 MB/s
stream load parquet: 31 seconds loaded 861443392 Bytes, about 26 MB/s
insert into select: 29.4 seconds inserted 10000000 Rows, about 340K ops/s
storage size: 17162144885 Bytes

@AshinGau (Member) commented:
LGTM

@github-actions (Contributor) commented:
PR approved by at least one committer and no changes requested.

@github-actions bot added the approved and reviewed labels Aug 17, 2023
@github-actions (Contributor) commented:
PR approved by anyone and no changes requested.

@kaka11chen (Contributor) left a review:

LGTM

@AshinGau AshinGau merged commit 330f369 into apache:master Aug 17, 2023
16 of 17 checks passed
xiaokang pushed a commit that referenced this pull request Aug 17, 2023
…e file cache concurrently (#22919)
airborne12 pushed a commit to airborne12/apache-doris that referenced this pull request Aug 21, 2023
…e file cache concurrently (apache#22919)
@xiaokang xiaokang mentioned this pull request Aug 30, 2023
Labels: approved, dev/2.0.1-merged, p1_query_exception, reviewed
Projects: None yet
6 participants