Questions about log probe #72
Yes. This behaviour is "correct" either way. But there are options to save some bandwidth, as you point out, depending on assumptions. I think the current strategy optimistically assumes that the first probe will succeed, and in this case we will save one roundtrip of latency. I can think of 2 cases when this probing happens:
What you're suggesting would be best for case (1). The current strategy is better for case (2) in terms of replication latency. There is no obviously always-better option; it's a trade-off. It's hard (but maybe not impossible) to distinguish between cases (1) and (2) on the leader end, to make this decision dynamic. So we stick to the optimistic approach, I guess. I don't have data, though, to support the argument that this optimistic approach is best on average. I think it largely depends on the deployment.
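To make the trade-off concrete, here is a small cost model of the two strategies under discussion. This is only an illustrative sketch, not raft code: the names `optimisticCost` and `pessimisticCost`, and the specific round-trip counts, are assumptions about an idealized exchange (one rejection at most, with the rejection carrying a usable index hint).

```go
package main

import "fmt"

// cost summarizes what one catch-up exchange with a follower spends.
type cost struct {
	roundTrips int // round trips until the batch of entries is accepted
	wastedMsgs int // full-size messages that were rejected (wasted bandwidth)
}

// optimisticCost models the current strategy: the leader attaches a full
// batch of entries to the very first probe.
func optimisticCost(firstProbeSucceeds bool) cost {
	if firstProbeSucceeds {
		// Batch accepted immediately: one round trip, nothing wasted.
		return cost{roundTrips: 1, wastedMsgs: 0}
	}
	// Batch rejected once; the retry at the hinted index succeeds.
	return cost{roundTrips: 2, wastedMsgs: 1}
}

// pessimisticCost models the suggested alternative: send an empty (or
// near-empty) probe first, and only ship the batch once the follower's
// position is confirmed.
func pessimisticCost(firstProbeSucceeds bool) cost {
	// Either way the batch travels only after confirmation: two round
	// trips, but no full-size message is ever wasted.
	return cost{roundTrips: 2, wastedMsgs: 0}
}

func main() {
	for _, ok := range []bool{true, false} {
		fmt.Printf("first probe succeeds=%v optimistic=%+v pessimistic=%+v\n",
			ok, optimisticCost(ok), pessimisticCost(ok))
	}
}
```

The table this prints shows why neither option dominates: the optimistic strategy wins a round trip exactly when the first probe succeeds, and pays a wasted full-size message exactly when it doesn't.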
Btw, see a related broader-scope issue #64: this and similar user/workload/deployment-dependent flow-control aspects could be delegated to the upper layer, and not necessarily hardcoded in
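One hedged sketch of what "delegating to the upper layer" could look like: a pluggable policy interface that the application supplies. The `ProbePolicy` interface and both policy types below are hypothetical names invented for illustration; nothing like this exists in the raft package today.

```go
package main

import "fmt"

// ProbePolicy is a hypothetical application-supplied hook deciding how many
// bytes of entries to attach to a message for a follower, depending on
// whether that follower is currently being probed.
type ProbePolicy interface {
	MaxBytes(inProbeState bool) uint64
}

// optimisticPolicy mirrors the current hardcoded behaviour: always attach
// up to maxMsgSize bytes, even while probing.
type optimisticPolicy struct{ maxMsgSize uint64 }

func (p optimisticPolicy) MaxBytes(bool) uint64 { return p.maxMsgSize }

// conservativePolicy attaches no entries while probing, saving bandwidth at
// the cost of an extra round trip once the probe succeeds.
type conservativePolicy struct{ maxMsgSize uint64 }

func (p conservativePolicy) MaxBytes(inProbeState bool) uint64 {
	if inProbeState {
		return 0
	}
	return p.maxMsgSize
}

func main() {
	for _, p := range []ProbePolicy{optimisticPolicy{1 << 20}, conservativePolicy{1 << 20}} {
		fmt.Printf("%T: probing=%d bytes, replicating=%d bytes\n",
			p, p.MaxBytes(true), p.MaxBytes(false))
	}
}
```

With such a hook, the optimistic-vs-conservative choice would become a per-deployment decision rather than a library constant.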
Thanks for your reply; closing this issue.
Look at the following piece of code: it will make raft send at most `maxMsgSize` bytes of entries even when raft is in probe state (i.e. `pr.State == tracker.StateProbe`):

raft/raft.go, lines 565 to 585 in 3e6cb62

I have two questions:

1. Why send up to `maxMsgSize` bytes of entries when raft is in probe state?
2. Why not check `pr.State != tracker.StateProbe` before attaching a full batch? Since in probe state it's very likely for an append message to be rejected, sending just one or zero entries might accelerate the probe process.

@ahrtr @pavelkalinnikov
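The change question 2 proposes could be sketched roughly as follows. This is a simplified stand-in, not the real `maybeSendAppend` logic: `entryLimit`, the local `StateType` constants, and the byte limits are illustrative assumptions (only the `StateProbe`/`maxMsgSize` names follow the snippet referenced above).

```go
package main

import "fmt"

// StateType is a simplified stand-in for tracker.StateType.
type StateType int

const (
	StateProbe StateType = iota
	StateReplicate
)

// entryLimit returns how many bytes of entries to attach to an append
// message for a follower in the given state. The probe-state branch is
// the hypothetical check from question 2.
func entryLimit(state StateType, maxMsgSize uint64) uint64 {
	if state == StateProbe {
		// The append is likely to be rejected while probing, so a big
		// batch would be wasted bandwidth; send "one or zero" entries
		// (modelled here as a zero-byte payload, i.e. probe only).
		return 0
	}
	return maxMsgSize
}

func main() {
	fmt.Println(entryLimit(StateProbe, 1<<20))     // probing: no batch
	fmt.Println(entryLimit(StateReplicate, 1<<20)) // replicating: full batch
}
```

As the replies above note, this saves bandwidth when the first probe would have been rejected, but costs one extra round trip of latency whenever the first probe would have succeeded.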