
raft follower async committlogs #5723

Closed
tangyuanzhang opened this issue Sep 20, 2023 · 1 comment
Labels
type/enhancement Type: make the code neat or more efficient

Comments

@tangyuanzhang
Contributor

tangyuanzhang commented Sep 20, 2023

Background:
The raft write path:

  1. The client sends a request.
  2. The server (leader) receives the request and writes the WAL locally.
  3. The data is replicated to the followers; each follower writes its WAL locally.
  4. The follower applies the data to its state machine.
  5. The follower responds to the leader.
  6. The leader applies the data to its state machine.
  7. The leader responds to the client.

The optimized raft write path:

  1. The client sends a request.
  2. The server (leader) receives the request and writes the WAL locally.
  3. The data is replicated to the followers; each follower writes its WAL locally and notifies an async thread to apply the data to its state machine.
  4. The follower responds to the leader.
  5. The leader applies the data to its state machine.
  6. The leader responds to the client.

Design

Today a raft follower handles appendLogs by taking a lock, writing the WAL, and then running commitLogs (writing to RocksDB). The advantage of this scheme is that there are no concurrency issues to worry about and the handling is simpler; the drawback is that the lock granularity is too coarse, which hurts QPS. To raise QPS and reduce latency, the lock granularity needs to be reduced.

To keep the data consistent, two points must be guaranteed:

1. Multiple commitLogs runs must execute in order (relative order is enough; absolute order is not required).

2. By the time a follower becomes leader and starts serving traffic, there must be no unfinished async commitLogs left.

One dedicated async thread per follower, used only for commitLogs, would guarantee ordering simply by being single-threaded. But each storage host in production carries hundreds to thousands of parts; with that many threads the CPU's time slices get cut more finely and context switches become more frequent, which is a net loss for performance, and most of the threads would sit idle anyway, wasting resources.

So the commitLogs work should run on a thread pool instead.

To guarantee that multiple commitLogs runs execute in order, the simple approach is to ensure that at any moment at most one task per part is executing. Without a lock, this can be implemented with an atomic variable, a classic multi-producer multi-consumer model.
apache::thrift::concurrency::ThreadManager = high-concurrency queue + thread pool

@tangyuanzhang tangyuanzhang added the type/enhancement Type: make the code neat or more efficient label Sep 20, 2023
@wey-gu
Contributor

wey-gu commented Sep 21, 2023

Why was this amazing proposal closed?
