storage: smarten GC of orphaned replicas of subsumed ranges #31570
Conversation
6f43d47 to ac0fd2d
Reviewed 2 of 2 files at r1.
Reviewable status: complete! 1 of 0 LGTMs obtained
pkg/storage/replica_gc_queue.go, line 264 at r1 (raw file):
}
if leftReplyDesc := rs[0]; !leftDesc.Equal(leftReplyDesc) {
    if log.V(1) {
While you're here, can you make all the V(x)s here into VEventfs so that Alex's queue endpoint gets better output?
Reviewable status: complete! 1 of 0 LGTMs obtained
pkg/storage/replica_gc_queue.go, line 264 at r1 (raw file):
Previously, tschottdorf (Tobias Schottdorf) wrote…
While you're here, can you make all the V(x)s here into VEventfs so that Alex's queue endpoint gets better output?
Done.
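For context, a minimal sketch of the conversion that was done here, assuming CockroachDB's util/log package; the condition and message text are illustrative, not the exact lines from replica_gc_queue.go:

```go
// Before: gated on verbosity, visible only in the log files.
if log.V(1) {
	log.Infof(ctx, "not gc'able, replica is still in the range descriptor: %v", desc)
}

// After: the same verbosity gating for the log, but the message is also
// recorded in ctx's trace, which is what the debug "enqueue" endpoint shows.
log.VEventf(ctx, 1, "not gc'able, replica is still in the range descriptor: %v", desc)
```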
pkg/storage/store_snapshot.go, line 453 at r4 (raw file):
} else {
    msg += "; initiated GC:"
}
PTAL at this. I had to remove this to get the test to not be flaky. (See the commit message for details.) But this isn't just for tests: I've observed this stuck snapshot loop on real clusters.
I'm very comfortable backporting the change to the GC queue, but I'm a little bit more nervous about backporting this change. Thoughts?
Reviewed 3 of 3 files at r2, 2 of 2 files at r3, 1 of 1 files at r4.
Reviewable status: complete! 1 of 0 LGTMs obtained
pkg/storage/store_snapshot.go, line 453 at r4 (raw file):
Previously, benesch (Nikhil Benesch) wrote…
PTAL at this. I had to remove this to get the test to not be flaky. (See the commit message for details.) But this isn't just for tests: I've observed this stuck snapshot loop on real clusters.
I'm very comfortable backporting the change to the GC queue, but I'm a little bit more nervous about backporting this change. Thoughts?
LGTM. My context for these checks is that we have always been very careful to avoid a meta2 hotspot, but always on principle, not because we knew it was a problem. Seeing how poorly this "careful" logic has performed, it seems better to do something less careful but more effective.
My only concern is with how often this code path can get hit. Can a tight snapshot loop starve out other replica GC attempts because of the higher priority here? ISTM that we should lower the priority to standard, at least after the first attempt (or based on a heuristic that would work for non-quiesced replicas).
BTW, there's another place in the code where we add to the GC queue with its own heuristic (Raft precandidate/candidate state, I think). Mind checking that location as well to make sure it works as intended?
Also, a comment could be valuable here for future readers -- in what situations do we expect to overlap an existing replica, and which of those don't eventually result in that replica being GC'd?
You're right that this is an unfortunate change at this point in the cycle, but on the other hand, now that we know how broken the existing heuristic is, what good is keeping it? Perhaps you can soften the impact by keeping the old heuristic for the higher priority and using the default priority otherwise, along with a comment to improve further at a later point.
Reviewable status: complete! 1 of 0 LGTMs obtained
pkg/storage/store_snapshot.go, line 453 at r4 (raw file):
Previously, tschottdorf (Tobias Schottdorf) wrote…
LGTM. My context for these checks is that we have always been very careful to avoid a meta2 hotspot, but always on principle, not because we knew it was a problem. Seeing how poorly this "careful" logic has performed, it seems better to do something less careful but more effective.
My only concern is with how often this code path can get hit. Can a tight snapshot loop starve out other replica GC attempts because of the higher priority here? ISTM that we should lower the priority to standard, at least after the first attempt (or based on a heuristic that would work for non-quiesced replicas).
BTW, there's another place in the code where we add to the GC queue with its own heuristic (Raft precandidate/candidate state, I think). Mind checking that location as well to make sure it works as intended?
Also, a comment could be valuable here for future readers -- in what situations do we expect to overlap an existing replica, and which of those don't eventually result in that replica being GC'd?
You're right that this is an unfortunate change at this point in the cycle, but on the other hand, now that we know how broken the existing heuristic is, what good is keeping it? Perhaps you can soften the impact by keeping the old heuristic for the higher priority and using the default priority otherwise, along with a comment to improve further at a later point.
Downgrading the priority is a good idea. There are two ways we can decide to downgrade (which you mentioned in passing above). The first is to downgrade if the replica has been GC'd in the last, say, ten seconds. This will require another tiny refactor to the replica GC queue, since it doesn't record that a GC has occurred in all cases. (Sigh.) The other is to use some as-yet unknown inactivity heuristic. I'm not sure what that would be. Do you or @ben have a suggestion?
Reviewable status: complete! 1 of 0 LGTMs obtained
pkg/storage/store_snapshot.go, line 453 at r4 (raw file):
Previously, benesch (Nikhil Benesch) wrote…
Downgrading the priority is a good idea. There are two ways we can decide to downgrade (which you mentioned in passing above). The first is to downgrade if the replica has been GC'd in the last, say, ten seconds. This will require another tiny refactor to the replica GC queue, since it doesn't record that a GC has occurred in all cases. (Sigh.) The other is to use some as-yet unknown inactivity heuristic. I'm not sure what that would be. Do you or @ben have a suggestion?
I'd tie the priority to the inactive check that was there: If it is inactive, enqueue the range at replicaGCPriorityCandidate. If it's not, use a lower priority.
Reviewed 3 of 3 files at r5, 2 of 2 files at r6, 1 of 1 files at r7.
Reviewable status: complete! 1 of 0 LGTMs obtained
pkg/storage/store_snapshot.go, line 453 at r4 (raw file):
Previously, bdarnell (Ben Darnell) wrote…
I'd tie the priority to the inactive check that was there: If it is inactive, enqueue the range at replicaGCPriorityCandidate. If it's not, use a lower priority.
Agreed.
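A minimal sketch of the approach agreed on here, assuming the existing replicaGCPriorityDefault and replicaGCPriorityCandidate constants; the numeric values and the inactivity predicate below are placeholders, not the real implementation:

```go
// Placeholder priority values; the real constants live in replica_gc_queue.go.
const (
	replicaGCPriorityDefault   = 0.0 // routine check, queued behind other work
	replicaGCPriorityCandidate = 1.0 // suspected orphan, check promptly
)

// gcPriority picks the replica GC queue priority for a replica that overlaps
// an incoming snapshot: only replicas that also look inactive get the high
// priority, so a tight snapshot-retry loop cannot starve other GC work.
func gcPriority(inactive bool) float64 {
	if inactive {
		return replicaGCPriorityCandidate
	}
	return replicaGCPriorityDefault
}
```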
If a store receives a snapshot that overlaps an existing replica, we take it as a sign that the local replica may no longer be a member of its range and queue it for processing in the replica GC queue.

When this code was added (cockroachdb#10426), the replica GC queue was quite aggressive about processing replicas, and so the implementation was careful to only queue a replica if it looked "inactive." Unfortunately, this inactivity check rotted when epoch-based leases were introduced a month later (cockroachdb#10305). An inactive replica with an epoch-based lease can incorrectly be considered active, even if all of the other members of the range have stopped sending it messages, because the epoch-based lease will continue to be heartbeated by the node itself. (With an expiration-based lease, the replica's local copy of the lease would quickly expire if the other members of the range stopped sending it messages.)

Fixing the inactivity check to work with epoch-based leases is rather tricky. Quiescent replicas are nearly indistinguishable from abandoned replicas. This commit just removes the inactivity check and unconditionally queues replicas for GC if they intersect an incoming snapshot. The replica GC queue check is relatively cheap (one or two meta2 lookups), and overlapping snapshot situations are not expected to last for very long.

Release note: None
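In rough terms, the change this commit message describes swaps a conditional enqueue for an unconditional one. The sketch below uses illustrative stand-ins (exReplica, snap, snapshotOverlaps) rather than the actual store_snapshot.go code:

```go
// Old behavior (sketch): only queue the overlapping replica if it looked
// inactive -- the check that rotted once epoch-based leases kept abandoned
// replicas looking active.
//
//	if snapshotOverlaps(exReplica, snap) && !exReplicaLooksActive {
//		s.replicaGCQueue.Add(exReplica, replicaGCPriorityCandidate)
//	}
//
// New behavior (sketch): the overlap itself is the signal, so always queue.
// The GC queue's own meta2 check decides whether removal is actually safe.
if snapshotOverlaps(exReplica, snap) {
	s.replicaGCQueue.Add(exReplica, replicaGCPriorityCandidate)
	msg += "; initiated GC:"
}
```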
When a range is subsumed, there are two paths by which its replicas can be cleaned up. The first path is that the subsuming replica, when it applies the merge trigger, removes the subsumed replica. This is the common case, as all replicas are collocated when the merge transaction starts.

The second path is that the subsumed replica is later cleaned up by the replica GC queue. This occurs when the subsuming range is rebalanced away shortly after the merge and so never applies the merge trigger, "orphaning" the subsumed replica.

The replica GC queue must be careful never to GC a replica that could be subsumed. If it discovers that a merge occurred, it needs to "prove" that the replica is actually orphaned. It does so by checking whether the left neighbor's local descriptor matches the meta2 descriptor; if it does not, the left neighbor is out of date and could possibly still apply a merge trigger, so the replica cannot be GC'd.

Unfortunately, the replica GC queue tried to be too clever: it assumed such a proof was not necessary if the store was still a member of the subsuming range. Concretely, suppose adjacent ranges A and B merge, and store 2's replica of B is orphaned. When the replica GC queue looks up B's descriptor in meta2, it will get the descriptor for the combined range AB instead and correctly infer that a merge occurred. It also assumed that, because AB is listed as having a replica on store 2, the merge must be applying soon.

This assumption was wrong. Suppose the merged range AB immediately splits back into A and B. The replica GC queue, considering store 2's replica of the new B, will, again, correctly infer that a merge took place (even though the descriptor it fetches from meta2 will have the same start and end key as its local descriptor, it will have a new range ID), but now its assumption that a replica of A must exist on the store is incorrect! A may have been rebalanced away, in which case we *must* GC the old copy of B, or the store will never be able to accept a snapshot for the new copy of B.

This scenario was observed in several real clusters, and easily reproduces when restoring TPC-C.

The fix is simple: teach the replica GC queue to always perform the proof when a range has been merged away. Attempting to be clever just to save one meta2 lookup was a bad idea.

Touches cockroachdb#31409.

Release note: None
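A simplified sketch of the "proof" described above, built around the replica_gc_queue.go context quoted earlier (rs[0] being the left neighbor's descriptor returned by the meta2 lookup); the log message and the early return are illustrative:

```go
// A merge was detected: meta2 returned a descriptor with a different range ID
// covering this replica's keys. Before GC'ing, prove the replica is truly
// orphaned by checking that the local copy of the left neighbor matches meta2.
if leftReplyDesc := rs[0]; !leftDesc.Equal(leftReplyDesc) {
	// The local left neighbor is stale: it could still apply a merge trigger
	// that subsumes this replica, so GC'ing now would be unsafe.
	log.VEventf(ctx, 1, "left neighbor %s not up-to-date with meta2 descriptor %s; cannot safely GC range yet",
		leftDesc, leftReplyDesc)
	return nil // try again later instead of risking an unsafe GC
}
// The left neighbor is current, so it can no longer apply a merge trigger
// that would subsume this replica: the replica is orphaned and safe to destroy.
```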
Convert the log.V calls to log.VEvent calls in the replica GC queue so that they show up in the debug enqueue page.

Release note: None
Well, fixing that flaky test was miserable. Making the unreliable Raft handler a little more unreliable did the trick. Now the test is the only thing that tries to GC the replica, since the replica no longer gets a ReplicaTooOld error from the remaining members of the range.
bors r=tschottdorf,bdarnell
Reviewable status: complete! 1 of 0 LGTMs obtained
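For readers unfamiliar with the test machinery: the unreliable Raft handler is a test wrapper that drops some Raft traffic for a range, so the orphaned replica never hears from the rest of the range. A toy illustration of the idea, not the actual helper used in the storage tests:

```go
// raftMessageHandler is a stand-in for the store's incoming Raft message hook.
type raftMessageHandler interface {
	HandleRaftMessage(rangeID int64, msg []byte)
}

// unreliableHandler drops all traffic for one range and forwards the rest.
// With the orphaned replica cut off like this, it never receives the
// "replica too old"-style response that would otherwise trigger GC, leaving
// the replica GC queue under test as the only cleanup path.
type unreliableHandler struct {
	dropRangeID int64
	next        raftMessageHandler
}

func (h unreliableHandler) HandleRaftMessage(rangeID int64, msg []byte) {
	if rangeID == h.dropRangeID {
		return // drop silently
	}
	h.next.HandleRaftMessage(rangeID, msg)
}
```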
31570: storage: smarten GC of orphaned replicas of subsumed ranges r=tschottdorf,bdarnell a=benesch

Co-authored-by: Nikhil Benesch <[email protected]>
Build succeeded
When a range is subsumed, there are two paths by which its replicas can
be cleaned up. The first path is that the subsuming replica, when it
applies the merge trigger, removes the subsumed replica. This is the
common case, as all replicas are collocated when the merge transaction
starts.
The second path is that the subsumed replica is later cleaned up by the
replica GC queue. This occurs when the subsuming range is rebalanced
away shortly after the merge and so never applies the merge trigger,
"orphaning" the subsumed replica.
The replica GC queue must be careful never to GC a replica that could
be subsumed. If it discovers that a merge occurred, it needs to "prove"
that the replica is actually orphaned. It does so by checking whether
the left neighbor's local descriptor matches the meta2 descriptor; if it
does not, the left neighbor is out of date and could possibly still
apply a merge trigger, so the replica cannot be GC'd.
Unfortunately, the replica GC queue tried to be too clever: it assumed
such a proof was not necessary if the store was still a member of the
subsuming range. Concretely, suppose adjacent ranges A and B merge, and
store 2's replica of B is orphaned. When the replica GC queue looks
up B's descriptor in meta2, it will get the descriptor for the combined
range AB instead and correctly infer that a merge occurred. It also
assumed that, because AB is listed as having a replica on store 2, the
merge must be applying soon.
This assumption was wrong. Suppose the merged range AB immediately
splits back into A and B. The replica GC queue, considering store 2's
replica of the new B, will, again, correctly infer that a merge took
place (even though the descriptor it fetches from meta2 will have the
same start and end key as its local descriptor, it will have a new range
ID), but now its assumption that a replica of A must exist on the store
is incorrect! A may have been rebalanced away, in which case we must
GC the old copy of B, or the store will never be able to accept a
snapshot for the new copy of B.
This scenario was observed in several real clusters, and easily
reproduces when restoring TPC-C.
The fix is simple: teach the replica GC queue to always perform the
proof when a range has been merged away. Attempting to be clever just to
save one meta2 lookup was a bad idea.
Touches #31409.
Release note: None
/cc @andreimatei