feat(rocksdb): Select the option of Direct-IO in Rocksdb #450

Merged · 9 commits · Jan 10, 2020

Conversation

@Smityz (Contributor) commented Dec 24, 2019

https://github.com/facebook/rocksdb/wiki/Direct-IO

Performance test of enabling Direct I/O for RocksDB on v1.12.0

1 Background

A comparative performance test of RocksDB before and after enabling Direct I/O.

2 Test environment

Linux version 3.18.6-2.el7.centos.x86_64
CentOS Linux release 7.3.1611 (Core)
Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Mem: 128G
SSD 480G × 8 (SATA 2.5")

3 meta nodes, 5 replica nodes

3 Test results

1 Single-point test

(Values are shown as before enabling / after enabling Direct I/O.)

| Test case | R/W ratio | Duration | QPS | Read avg latency | Read P99 latency | TPS | Write avg latency | Write P99 latency |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) Data loading: 3 clients × 10 threads | 0:1 | 2.1 | - | - | - | 40439/41093 | 739/730 | 2995/2801 |
| (2) Concurrent read/write: 3 clients × 15 threads | 1:3 | 1.3 | 16022/15925 | 309/376 | 759/1981 | 48078/48078 | 830/816 | 3995/3987 |
| (3) Concurrent read/write: 3 clients × 30 threads | 30:1 | 0.5 | 244392/243314 | 346/344 | 652/730 | 8137/8174 | 731/717 | 2995/2869 |
| (4) Read only: 6 clients × 100 threads | 1:0 | 0.3 | 672737/612085 | 914/1053 | 3205/2841 | - | - | - |

2 Range test

| Test case | R/W ratio | Duration | QPS | Read avg latency | Read P99 latency | TPS | Write avg latency | Write P99 latency |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) Read only: client × 10 threads, range scan | 1:0 | 0.83 | 153276 | 188/196 | 411/428 | - | - | - |

Test summary

  1. Resource usage metrics (e.g. CPU usage, available memory) show no significant difference compared with the baseline.
  2. After RocksDB is upgraded to v5.15, automatic iterator-readahead can be enabled, which is expected to further improve performance.

New configuration options

```ini
[pegasus.server]
rocksdb_use_direct_reads = false
rocksdb_use_direct_io_for_flush_and_compaction = false
rocksdb_compaction_readahead_size = 2MB
rocksdb_writable_file_max_buffer_size = 1MB
```
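
For context, the four `rocksdb_*` entries above presumably map onto the `rocksdb::Options` fields of the same name. Below is a minimal C++ sketch of applying those options when opening a RocksDB instance; the DB path and the standalone `DB::Open` call are illustrative only and not part of this PR.

```cpp
#include <cassert>

#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::Options opts;
  opts.create_if_missing = true;

  // Counterparts of the new Pegasus config entries, set to the defaults listed above.
  opts.use_direct_reads = false;                        // rocksdb_use_direct_reads
  opts.use_direct_io_for_flush_and_compaction = false;  // rocksdb_use_direct_io_for_flush_and_compaction
  opts.compaction_readahead_size = 2 << 20;             // rocksdb_compaction_readahead_size = 2MB
  opts.writable_file_max_buffer_size = 1 << 20;         // rocksdb_writable_file_max_buffer_size = 1MB

  rocksdb::DB* db = nullptr;
  // "/tmp/direct_io_test" is a hypothetical path used only for this sketch.
  rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/direct_io_test", &db);
  assert(s.ok());

  delete db;
  return 0;
}
```

When direct I/O is used for flush and compaction, compaction reads no longer go through the OS page cache, which is why a non-zero `compaction_readahead_size` is typically configured alongside it so that compaction reads are batched.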

@neverchanje added the type/config-change label (Added or modified configuration that should be noted on release note of new version.) on Jan 10, 2020
Labels: 1.12.3, type/config-change