[Fourth solution] Fix the potential data loss for clusters with only one member (raft layer change) #14411
Conversation
Force-pushed from 83bada5 to cf9306b
Codecov Report
@@ Coverage Diff @@
## main #14411 +/- ##
==========================================
- Coverage 75.56% 75.26% -0.30%
==========================================
Files 457 458 +1
Lines 37183 37202 +19
==========================================
- Hits 28098 28001 -97
- Misses 7335 7432 +97
- Partials 1750 1769 +19
Force-pushed from cf9306b to d1957fe
Thank you. This change looks good to me.
The fixed tests are of huge value, regardless of whether we go with Step or the in-line approach.
Force-pushed from 0773176 to a799416
For a cluster with only one member, the raft layer always sends identical unstable entries and committed entries to etcdserver, and etcd responds to the client once it finishes (actually only partially finishes) the applying workflow. When the client receives the response, it does not mean etcd has already successfully saved the data to BoltDB and the WAL, because:
1. etcd commits the BoltDB transaction periodically instead of on each request;
2. etcd saves WAL entries in parallel with applying the committed entries.
Accordingly, data may be lost if etcd crashes immediately after responding to the client and before BoltDB and the WAL have persisted the data to disk.

Note that this issue can only happen for clusters with only one member. For clusters with multiple members it is not an issue, because etcd does not commit & apply the data before it has been replicated to a majority of members. When the client receives the response, the data must have been applied, which in turn means it must have been committed. Note: for clusters with multiple members, the raft layer never sends identical unstable entries and committed entries to etcdserver.

Signed-off-by: Benjamin Wang <[email protected]>
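A minimal Go sketch of the hazard described above, under stated assumptions: the names `entry`, `saveWAL`, and `applyToBackend` are hypothetical stand-ins for the real etcdserver/raft plumbing, not etcd code. It only illustrates the ordering problem: when the same entries arrive as both unstable and committed, the WAL fsync and the apply-then-ack path run concurrently, so the client can be acked before anything is durable.

```go
// Sketch only (not etcd code): shows why acking the client after apply,
// while the WAL fsync runs in parallel, can lose data on crash in a
// one-member cluster.
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	index uint64
	data  string
}

// saveWAL pretends to persist entries durably (fsync); it is slow.
func saveWAL(ents []entry) {
	time.Sleep(10 * time.Millisecond) // simulated disk latency
	fmt.Println("WAL persisted up to index", ents[len(ents)-1].index)
}

// applyToBackend pretends to apply committed entries to the (periodically
// committed) backend and then acks the waiting client for each entry.
func applyToBackend(ents []entry, ack func(uint64)) {
	for _, e := range ents {
		ack(e.index) // the client sees success here
	}
}

func main() {
	ents := []entry{{index: 1, data: "put k v"}}

	var wg sync.WaitGroup
	wg.Add(2)

	// In a one-member cluster the same entries arrive as both unstable
	// (to be saved) and committed (to be applied), so both paths start
	// concurrently.
	go func() { defer wg.Done(); saveWAL(ents) }()
	go func() {
		defer wg.Done()
		applyToBackend(ents, func(i uint64) {
			fmt.Println("client acked for index", i)
			// If the process crashed right here, the ack has been sent
			// but neither the WAL fsync nor the backend commit may have
			// completed: the write can be lost on restart.
		})
	}()

	wg.Wait()
}
```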
1. Added one more command, "report-status", so that the leader can acknowledge that the entries have already been persisted.
2. Regenerated some test data.
Signed-off-by: Benjamin Wang <[email protected]>
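A hedged sketch of the intended ordering behind such a change, with illustrative names only (`reportStatus` and `readyToApply` are not the actual raft API): committed entries are only released for apply once the node has reported that they are persisted, so a single-member cluster never acks a client ahead of the WAL.

```go
// Sketch only: release committed entries for apply only after persistence
// has been reported back to the (hypothetical) raft state machine.
package main

import "fmt"

type entry struct{ index uint64 }

type node struct {
	persisted uint64 // highest index known to be on stable storage
	committed []entry
}

// reportStatus is the hypothetical "entries are persisted" signal.
func (n *node) reportStatus(persistedIndex uint64) {
	n.persisted = persistedIndex
}

// readyToApply returns only committed entries whose index is persisted.
func (n *node) readyToApply() []entry {
	var out []entry
	for _, e := range n.committed {
		if e.index <= n.persisted {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	n := &node{committed: []entry{{1}, {2}}}
	fmt.Println("before report:", len(n.readyToApply()), "entries applyable")
	n.reportStatus(2) // WAL fsync finished up to index 2
	fmt.Println("after report: ", len(n.readyToApply()), "entries applyable")
}
```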
Force-pushed from a799416 to e60cb56
The fourth solution to fix #14370