---
title: Technical Advisory 64325
summary: Race condition between long running reads and replica removal
toc: true
---

Publication date: May 3, 2021

## Description

Cockroach Labs has discovered a race condition where a long-running read request submitted during replica removal may be evaluated on the removed replica, returning an empty result. Common causes of replica removal are range merges and replica rebalancing in the cluster. When a replica is removed, there is a small window where the data has already been deleted but a read request will still see it as a valid replica. In this case the read request will always return an empty result for the part of the query that relied on this range.

This is a very rare scenario that can only occur if a number of unlikely conditions are met:

- A read request checks that the node is holding the lease and begins processing.
- The node loses the lease before the read request is able to lock the replica for reading.
- The read request checks the state of the replica and confirms it is alive.
- The new leaseholder decides to immediately remove the old leaseholder's replica from the range, due to a rebalance or range merge.
- The removal is processed before the read request starts reading the data from the replica.
- The read request observes the replica in a post-deletion state and returns an empty result set.

Due to the nature of this race condition, we believe it is only likely to occur on an extremely overloaded node that is struggling to keep up with processing requests and experiences natural delays between each step in the processing of a read request.

This issue affects CockroachDB [versions](/docs/releases/) 19.2 and later.

## Statement

This is resolved in CockroachDB by PR [#64324], which fixes the race condition between read requests and replica removal.

The fix has been applied to maintenance releases of CockroachDB v20.1, v20.2, and v21.1.

This public issue is tracked by [#64325].

## Mitigation

Users of CockroachDB v19.2, v20.1, or v20.2 are invited to upgrade to v20.1.16, v20.2.9, or a later version.
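
If you are unsure which version a cluster is currently running, you can check from the SQL shell before planning an upgrade. This is a minimal sketch using standard CockroachDB statements; it is not specific to this advisory:

```sql
-- Report the CockroachDB build version of the node you are connected to.
SELECT version();

-- Report the active cluster version setting (the version the cluster
-- as a whole has been finalized to).
SHOW CLUSTER SETTING version;
```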

This issue may impact CockroachDB backups. You can take the following steps to validate your backups and make sure they are correct and complete. TODO: @mattsherman.
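
The specific validation steps are not included above. As a hedged sketch of one possible approach, you can inspect a backup's contents and restore part of it into a scratch database for comparison against the live data; the storage URL, database, and table names below are hypothetical placeholders:

```sql
-- Inspect the contents and metadata of an existing backup.
-- The storage URL is a placeholder; substitute your own backup location.
SHOW BACKUP 's3://acme-backups/2021-05-02?AUTH=implicit';

-- Restore a table from the backup into a scratch database so it can be
-- compared against the live table without touching production data.
CREATE DATABASE IF NOT EXISTS bank_verify;
RESTORE bank.accounts FROM 's3://acme-backups/2021-05-02?AUTH=implicit'
    WITH into_db = 'bank_verify';

-- Spot-check the restored copy against the live table, for example by
-- comparing row counts.
SELECT count(*) FROM bank_verify.accounts;
SELECT count(*) FROM bank.accounts;
```

Divergence between the restored copy and the live data can indicate an incomplete backup.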

## Impact

This issue may impact all read operations. While the window of opportunity is very limited, it may have occurred without your knowledge. Because read operations include full and incremental backups, if your cluster has experienced overload conditions we encourage you to verify your backups using the instructions provided above.