
[Second Solution] Fix the potential data loss for clusters with only one member (simpler solution) #14400

Merged · 1 commit · Sep 5, 2022

Conversation

@ahrtr (Member) commented on Aug 30, 2022

Second solution to fix #14370

This solution is based on the following feedback:

  1. Durability API guarantee broken in single node cluster #14370 (comment) from @hasethuraman
  2. Fix the potential data loss for clusters with only one member #14394 (comment) from @lavacat
  3. Durability API guarantee broken in single node cluster #14370 (comment) from @ptabor

I compared the performance between this PR and #14394 for a one-member cluster; overall #14394 is a little better than this one (by about 2.7%). But this PR is much simpler: excluding the test and comments, it only changes about 20 lines of code.

cc @serathius @spzala @ptabor @liggitt @dims

@ahrtr (Member, Author) commented on Aug 30, 2022

The pipeline failures are caused by 70de5c8. I just delivered another PR #14401 to fix it.

For a cluster with only one member, raft always sends identical
unstable entries and committed entries to etcdserver, and etcd
responds to the client once it finishes (actually only partially
finishes) the apply workflow.

When the client receives the response, it doesn't mean etcd has already
successfully saved the data to boltDB and the WAL, because:
   1. etcd commits the boltDB transaction periodically instead of on each request;
   2. etcd saves WAL entries in parallel with applying the committed entries.
Accordingly, data may be lost if etcd crashes immediately after responding
to the client and before boltDB and the WAL have persisted the data to disk.
Note that this issue can only happen in clusters with only one member.

For clusters with multiple members, this isn't an issue, because etcd will
not commit & apply the data before it has been replicated to a majority of members.
When the client receives the response, the data must have been applied,
which in turn means the data must have been committed.
Note: for clusters with multiple members, raft never sends identical
unstable entries and committed entries to etcdserver.

Signed-off-by: Benjamin Wang <[email protected]>
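To make the ordering concrete, here is a minimal, hypothetical Go sketch of the idea behind the fix for the single-member case: persist the entries to the WAL (and sync) before applying them and responding to the client. The types `entry`, `wal`, `saveAndSync`, and `applyCommitted` are invented for illustration and are not etcd's actual code.

```go
// Hypothetical sketch (not the actual etcd code): in a one-member
// cluster the committed entries raft hands back may not be durable yet,
// so they must reach the WAL (with fsync) before the apply step runs;
// otherwise the client can receive a response for data that a crash
// can still erase.
package main

import "fmt"

type entry struct {
	Index uint64
	Data  string
}

// wal is a stand-in for etcd's write-ahead log.
type wal struct{ persisted []entry }

// saveAndSync appends entries and (conceptually) fsyncs them to disk.
func (w *wal) saveAndSync(ents []entry) {
	w.persisted = append(w.persisted, ents...)
	// fsync would happen here in a real implementation.
}

// applyCommitted hands committed entries to the state machine / backend.
func applyCommitted(ents []entry) {
	for _, e := range ents {
		fmt.Printf("applied index=%d data=%q\n", e.Index, e.Data)
	}
}

func main() {
	w := &wal{}

	// In a one-member cluster, raft can return the same entries as both
	// "unstable" (not yet in the WAL) and "committed" (ready to apply).
	unstable := []entry{{Index: 1, Data: "put k v"}}
	committed := unstable

	// Buggy ordering: applying (and answering the client) in parallel
	// with, or before, the WAL write -- a crash in between loses data.
	//
	// Safe ordering for the single-member case: make the WAL write
	// durable first, then apply and respond to the client.
	w.saveAndSync(unstable)
	applyCommitted(committed)
}
```

In the real server the WAL save and the apply run in parallel, so the fix presumably amounts to enforcing this ordering only when the committed entries overlap with the unstable ones, which the commit message notes can only happen in a single-member cluster.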
@ahrtr force-pushed the one_member_data_loss_raft branch from b5c5455 to 2a10049 on August 30, 2022 07:29
@ahrtr ahrtr changed the title Fix the potential data loss for clusters with only one member (Second solution) Fix the potential data loss for clusters with only one member (simpler solution) Aug 30, 2022
@ahrtr (Member, Author) commented on Aug 30, 2022

I suggest cherry-picking this PR or #14394 to 3.5 and 3.4.

We can continue to enhance the raft package implementation in the main branch only.

@ahrtr ahrtr changed the title Fix the potential data loss for clusters with only one member (simpler solution) [Second Solution] Fix the potential data loss for clusters with only one member (simpler solution) Aug 31, 2022
@ahrtr ahrtr mentioned this pull request Aug 31, 2022
@serathius (Member) left a comment

Looks like the best intermediate solution for etcdserver as proposed in #14370 (comment)

Successfully merging this pull request may close the following issue: Durability API guarantee broken in single node cluster (#14370)

3 participants