raft: never remove the last voter #10884

Closed
22 changes: 13 additions & 9 deletions raft/raft_test.go
@@ -1139,10 +1139,16 @@ func TestCommit(t *testing.T) {
 		storage.Append(tt.logs)
 		storage.hardState = pb.HardState{Term: tt.smTerm}

-		sm := newTestRaft(1, []uint64{1}, 10, 2, storage)
-		sm.prs.RemoveAny(1)
+		var ids []uint64
 		for j := 0; j < len(tt.matches); j++ {
-			sm.prs.InitProgress(uint64(j)+1, tt.matches[j], tt.matches[j]+1, false)
+			ids = append(ids, uint64(j+1))
 		}
+		sm := newTestRaft(1, ids, 10, 2, storage)
+
+		for j := 0; j < len(tt.matches); j++ {
+			id := uint64(j + 1)
+			sm.prs.Progress[id].Match = tt.matches[j]
+			sm.prs.Progress[id].Next = tt.matches[j] + 1
+		}
 		sm.maybeCommit()
 		if g := sm.raftLog.committed; g != tt.w {
@@ -3142,9 +3148,8 @@ func TestRemoveNode(t *testing.T) {
 		t.Errorf("nodes = %v, want %v", g, w)
 	}

-	// remove all nodes from cluster
+	// The last remaining node will refuse to remove itself.
 	r.applyConfChange(pb.ConfChange{NodeID: 1, Type: pb.ConfChangeRemoveNode})
-	w = []uint64{}
 	if g := r.prs.VoterNodes(); !reflect.DeepEqual(g, w) {
 		t.Errorf("nodes = %v, want %v", g, w)
 	}
@@ -3160,14 +3165,13 @@ func TestRemoveLearner(t *testing.T) {
 		t.Errorf("nodes = %v, want %v", g, w)
 	}

-	w = []uint64{}
-	if g := r.prs.LearnerNodes(); !reflect.DeepEqual(g, w) {
+	if w, g := []uint64{}, r.prs.LearnerNodes(); !reflect.DeepEqual(g, w) {
 		t.Errorf("nodes = %v, want %v", g, w)
 	}

-	// remove all nodes from cluster
+	// The remaining voter will refuse to remove itself.
 	r.applyConfChange(pb.ConfChange{NodeID: 1, Type: pb.ConfChangeRemoveNode})
-	if g := r.prs.VoterNodes(); !reflect.DeepEqual(g, w) {
+	if w, g := []uint64{1}, r.prs.VoterNodes(); !reflect.DeepEqual(g, w) {
 		t.Errorf("nodes = %v, want %v", g, w)
 	}
 }
5 changes: 5 additions & 0 deletions raft/tracker/tracker.go
@@ -157,6 +157,11 @@ func (p *ProgressTracker) RemoveAny(id uint64) {
 		panic(fmt.Sprintf("peer %x is both voter and learner", id))
 	}

+	if okV1 && len(p.Voters[0]) == 1 {
+		// Never remove the last voter.
+		return
+	}
+
 	delete(p.Voters[0], id)
 	delete(p.Voters[1], id)
 	delete(p.Learners, id)

Review thread on the early return:

Contributor:
Silently returning here seems like a potential problem. Should this be a panic just like the other invalid removals?

Contributor Author:
What I am doing in a WIP (unpublished) is returning an error from ApplyConfChange, which essentially allows the app to delegate config change checking to Raft itself. I'm doing this since it becomes more difficult to reason about what's allowed and what isn't when there are joint quorums in play. Additionally, the way conf changes are set up (the app passing in a "delta") further complicates this: the app basically has to grab the current config, compute what the final config would be, and decide whether to actually pass the delta to Raft. That's quite a bit of work that's easy to get wrong.

In CRDB, we'll want to do this check on our side too, to avoid diverging our descriptor-encoded config and our Raft-encoded one (today we just don't check). But the code Raft uses to "check" the transition will be modular, so we can compute the final config on our end that way, check the result against the descriptor, and then feed it to Raft (knowing there won't be an error).

I wanted to send out a small PR making just this change, and I'd prefer not to panic since it's all the same (at least this way I can test this).

We also already have an unfortunate history of hackily ignoring config changes, for example those issued in StartNode and those with a NodeID of None. I will see about unifying all that down the road, though it's not my top priority.
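
A rough sketch of the validate-before-apply pattern described above, in heavily simplified form. The types and helper here (`ConfChange`, `applyDelta`) are stand-ins for illustration, not etcd's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// ConfChangeType is a simplified stand-in for raftpb.ConfChangeType.
type ConfChangeType int

const (
	AddNode ConfChangeType = iota
	RemoveNode
)

// ConfChange is a simplified stand-in for raftpb.ConfChange: a "delta"
// against the current configuration, not the final configuration itself.
type ConfChange struct {
	Type   ConfChangeType
	NodeID uint64
}

// applyDelta (hypothetical) computes the configuration that would result
// from applying the delta to the current voter set, and rejects
// transitions that would leave the group without any voters. This is the
// check that each app would otherwise have to reimplement by hand.
func applyDelta(voters map[uint64]struct{}, cc ConfChange) (map[uint64]struct{}, error) {
	out := make(map[uint64]struct{}, len(voters))
	for id := range voters {
		out[id] = struct{}{}
	}
	switch cc.Type {
	case AddNode:
		out[cc.NodeID] = struct{}{}
	case RemoveNode:
		delete(out, cc.NodeID)
	}
	if len(out) == 0 {
		return nil, errors.New("config change would remove the last voter")
	}
	return out, nil
}

func main() {
	voters := map[uint64]struct{}{1: {}}
	if _, err := applyDelta(voters, ConfChange{Type: RemoveNode, NodeID: 1}); err != nil {
		// With an error surfaced this way, the app can refuse to feed the
		// delta to Raft instead of relying on Raft to silently ignore it.
		fmt.Println("rejected:", err)
	}
}
```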

Contributor:
> I wanted to send out a small PR making just this change, and I'd prefer not to panic since it's all the same (at least this way I can test this).

My thinking is that since there are already (untested) panics here, it makes sense to do the same for this new case, but I don't feel strongly about it (and you could still test it with RawNode by catching the panic). If you've got other cleanups coming, this is fine.
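
For reference, catching an expected panic in a Go test is only a little boilerplate. A minimal sketch; the helper name `expectPanic` is ours, not from the etcd codebase:

```go
package raft_test

import "testing"

// expectPanic runs fn and fails the test unless fn panics. The deferred
// recover converts the panic into a normal test assertion, which is how
// a panicking RemoveAny (or RawNode.ApplyConfChange) could be exercised.
func expectPanic(t *testing.T, fn func()) {
	t.Helper()
	defer func() {
		if recover() == nil {
			t.Fatal("expected panic, got none")
		}
	}()
	fn()
}

func TestExpectPanic(t *testing.T) {
	expectPanic(t, func() { panic("removing the last voter") })
}
```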

Contributor:
We probably should at least document this behavior if we do not panic here.

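
To summarize the change: removing the sole remaining voter is now a no-op. A trimmed-down, single-configuration sketch of the guard's effect (the real ProgressTracker handles joint configs via Voters[0] and Voters[1], which this toy version omits):

```go
package main

import "fmt"

// tracker is a toy stand-in for tracker.ProgressTracker, keeping only
// the voter and learner sets.
type tracker struct {
	voters   map[uint64]struct{}
	learners map[uint64]struct{}
}

// removeAny mirrors the PR's early return: a request to remove the last
// remaining voter is silently ignored.
func (t *tracker) removeAny(id uint64) {
	if _, isVoter := t.voters[id]; isVoter && len(t.voters) == 1 {
		// Never remove the last voter.
		return
	}
	delete(t.voters, id)
	delete(t.learners, id)
}

func main() {
	tr := &tracker{
		voters:   map[uint64]struct{}{1: {}, 2: {}},
		learners: map[uint64]struct{}{},
	}
	tr.removeAny(2)            // allowed: one voter remains
	tr.removeAny(1)            // refused: node 1 is the last voter
	fmt.Println(len(tr.voters)) // prints 1
}
```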