kvserver: improve Raft append behavior when follower is missing log entries #113053
Labels
A-kv-observability
A-kv-replication: Relating to Raft, consensus, and coordination.
C-enhancement: Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception)
C-investigation: Further steps needed to qualify. C-label will change.
C-performance: Perf of queries or internals. Solution not expected to change functional behavior.
T-kv: KV Team
Comments
erikgrinaker added the C-enhancement, C-investigation, C-performance, A-kv-replication, A-kv-observability, and T-kv-replication labels on Oct 25, 2023

cc @cockroachdb/replication
This was referenced Oct 25, 2023
sumeerbhola added a commit to sumeerbhola/cockroach that referenced this issue on Oct 27, 2023:

The raft.storage.error metric is incremented on an error, and the error is logged every 30s (across all replicas). This was motivated by a test cluster that slowed to a crawl because of deliberate data loss, but was hard to diagnose. The metric could be used for alerting, since we don't expect to see transient errors.

Informs cockroachdb#113053
Epic: none
Release note: None
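As an illustration of the pattern this commit describes, here is a minimal, self-contained sketch: always increment a counter when raft.Storage returns an error, but rate-limit the log line to once per 30s. The names here (`onRaftStorageError`, `everyN`) are hypothetical stand-ins, not CockroachDB's actual metrics or logging infrastructure:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// raftStorageErrors stands in for the raft.storage.error counter; the real
// metric lives in CockroachDB's metric registry.
var raftStorageErrors atomic.Int64

// everyN rate-limits log output to at most once per interval, shared across
// all replicas, as the commit message describes.
type everyN struct {
	mu       sync.Mutex
	interval time.Duration
	last     time.Time
}

func (e *everyN) shouldLog(now time.Time) bool {
	e.mu.Lock()
	defer e.mu.Unlock()
	if now.Sub(e.last) >= e.interval {
		e.last = now
		return true
	}
	return false
}

var storageErrLogEvery = everyN{interval: 30 * time.Second}

// onRaftStorageError is called whenever raft.Storage returns an error: the
// metric is always incremented, but the log line is emitted at most every 30s.
func onRaftStorageError(rangeID int64, err error) {
	raftStorageErrors.Add(1)
	if storageErrLogEvery.shouldLog(time.Now()) {
		fmt.Printf("r%d: raft storage error: %v\n", rangeID, err)
	}
}

func main() {
	onRaftStorageError(42, errors.New("entry missing"))
	onRaftStorageError(42, errors.New("entry missing")) // counted, but log suppressed
	fmt.Println("errors:", raftStorageErrors.Load())
}
```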
sumeerbhola added a commit to sumeerbhola/cockroach that referenced this issue on Nov 1, 2023, with the same commit message as above.
craig bot pushed a commit that referenced this issue on Nov 6, 2023:

112078: roachtest: clean up command-line flags r=RaduBerinde a=RaduBerinde

**This PR is only for the last commit. The rest are #111811**

#### roachtest: clean up command-line flags

The code around command-line flags is pretty messy: flags are defined in many places; the name and description of a flag are far away from the variable; and the variable names look like local variables, so in many cases it's not obvious we're accessing a global. This commit moves all flags to a separate subpackage, `roachtestflags`, making all uses of global flags obvious. We also add a bit of infrastructure to allow defining all information about a flag right next to where the variable is declared. We also provide a `Changed()` function that determines whether a flag value was changed (without the caller having to use the Command or even the flag name). There should be no functional changes (just some cosmetic improvements to the flag usage texts).

Epic: none
Release note: None

113245: kvserver: add metric and log when raft.Storage returns an error r=erikgrinaker a=sumeerbhola

The raft.storage.error metric is incremented on an error, and the error is logged every 30s (across all replicas). This was motivated by a test cluster that slowed to a crawl because of deliberate data loss, but was hard to diagnose. The metric could be used for alerting, since we don't expect to see transient errors.

Informs #113053
Epic: none
Release note: None

113335: kvpb: delete ErrorDetail message r=nvanbenschoten a=nvanbenschoten

This was unused, so delete it. The message has been unused since 0c12f6c.

Epic: None
Release note: None

113636: concurrency: recompute wait queues when locking requests drop out r=nvanbenschoten a=arulajmani

First commit from #112732

A locking request must actively wait in a lock's wait queues if:

- it conflicts with any of the lock holders,
- or it conflicts with a lower sequence numbered request already in the lock's wait queue.

As a result, if a locking request exits a lock's wait queue without actually acquiring the lock, it may allow other locking requests to proceed. This patch recomputes wait queues whenever a locking request exits a lock's wait queues, to detect such scenarios and unblock requests which were actively waiting previously but no longer need to. A toy model of the rule is sketched below.

Fixes #111144
Release note: None

Co-authored-by: Radu Berinde <[email protected]>
Co-authored-by: sumeerbhola <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Arul Ajmani <[email protected]>
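To make the wait-queue rule from #113636 concrete, here is a toy model; all types are hypothetical simplifications, not the real `pkg/kv/kvserver/concurrency` code. A request must wait if it conflicts with the lock holder or with a lower-sequence queued request, and the queue is re-evaluated when a waiter drops out:

```go
package main

import "fmt"

// request and lockState are hypothetical simplifications.
type request struct {
	seq       int
	exclusive bool
}

type lockState struct {
	holder *request
	queue  []*request // ordered by sequence number
}

// conflicts: two requests conflict unless both are shared (non-exclusive).
func conflicts(a, b *request) bool {
	return a.exclusive || b.exclusive
}

// mustWait reports whether req has to actively wait at this lock: it does if
// it conflicts with the holder, or with a lower-sequence queued request.
func mustWait(l *lockState, req *request) bool {
	if l.holder != nil && conflicts(l.holder, req) {
		return true
	}
	for _, waiter := range l.queue {
		if waiter.seq < req.seq && conflicts(waiter, req) {
			return true
		}
	}
	return false
}

// onWaiterExit removes a departing waiter and re-evaluates the rest of the
// queue: requests that only conflicted with the departed waiter can proceed.
func onWaiterExit(l *lockState, departed *request) {
	filtered := l.queue[:0]
	for _, w := range l.queue {
		if w != departed {
			filtered = append(filtered, w)
		}
	}
	l.queue = filtered
	for _, w := range l.queue {
		if !mustWait(l, w) {
			fmt.Printf("request seq=%d no longer needs to wait\n", w.seq)
		}
	}
}

func main() {
	excl := &request{seq: 1, exclusive: true}
	shared := &request{seq: 3, exclusive: false}
	l := &lockState{queue: []*request{excl, shared}}
	onWaiterExit(l, excl) // seq=3 only conflicted with seq=1 and now unblocks
}
```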
In a test cluster where `kv.raft_log.synchronization.unsafe.disabled` was enabled (i.e. process crashes will lose Raft data), we saw a range become unavailable, stalling the workload. CPU was persistently spinning at 100% on the Raft leader (n2) and one follower (n6), with the following CPU profiles:

[CPU profile: n2 (leader)]

[CPU profile: n6 (follower)]
Notice the call to `raftLog.findConflictByTerm()` on the leader. This call only happens when the follower rejected a `MsgApp`, via a `MsgAppResp` with `Reject: true`:

https://github.com/etcd-io/raft/blob/ee0fe9da492888b55fe183cf1a42931ad551ec6b/raft.go#L1339-L1459
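For readers who don't want to chase the link, here is a condensed sketch of that leader-side logic, with simplified names rather than the actual etcd-io/raft code: on a rejected `MsgAppResp`, the leader uses the follower's reject hint and term to jump its next probe index backwards via `findConflictByTerm`, skipping whole terms instead of decrementing one index at a time:

```go
package main

import "fmt"

// msgAppResp carries the fields relevant to a rejection.
type msgAppResp struct {
	rejectHint uint64 // the follower's suggested fall-back index
	logTerm    uint64 // the follower's term at rejectHint (0 if unreadable)
}

// termAt returns the leader's term for a given log index (0 if unknown).
type termAt func(index uint64) uint64

// findConflictByTerm walks backwards from index until it reaches an entry
// whose term is <= term, mirroring raftLog.findConflictByTerm.
func findConflictByTerm(index, term uint64, leaderTerm termAt) uint64 {
	for ; index > 0; index-- {
		if leaderTerm(index) <= term {
			break
		}
	}
	return index
}

// nextProbeIndex computes where the leader should resume appending after a
// rejection.
func nextProbeIndex(m msgAppResp, leaderTerm termAt) uint64 {
	next := m.rejectHint
	if m.logTerm > 0 {
		// Only possible when the follower returned a non-zero term hint.
		next = findConflictByTerm(m.rejectHint, m.logTerm, leaderTerm)
	}
	if next < 1 {
		next = 1
	}
	return next
}

func main() {
	// Leader terms by index: 1,1,2,2,3. The follower rejected with hint=4 and
	// logTerm=1, so the leader can jump straight back to index 2.
	terms := []uint64{0, 1, 1, 2, 2, 3}
	leaderTerm := func(i uint64) uint64 {
		if i < uint64(len(terms)) {
			return terms[i]
		}
		return 0
	}
	fmt.Println("next probe index:",
		nextProbeIndex(msgAppResp{rejectHint: 4, logTerm: 1}, leaderTerm))
}
```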
This happens when the follower fails to append a set of log entries, e.g. because the follower is lacking a prefix of the log:
https://github.com/etcd-io/raft/blob/ee0fe9da492888b55fe183cf1a42931ad551ec6b/raft.go#L1738-L1770
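A simplified sketch of that follower-side check, with illustrative types rather than the library's: the follower rejects a `MsgApp` whose preceding entry it cannot match, e.g. because a prefix of its log is missing, and returns a hint for the leader to back up to:

```go
package main

import "fmt"

// followerLog is an illustrative stand-in for the follower's raft log; absent
// map keys model entries the follower never received (a missing prefix).
type followerLog struct {
	terms map[uint64]uint64 // index -> term
}

func (l *followerLog) lastIndex() uint64 {
	var last uint64
	for i := range l.terms {
		if i > last {
			last = i
		}
	}
	return last
}

// matchesPrev reports whether the follower can append entries following
// (prevIndex, prevTerm). On mismatch it returns a reject hint: the smaller of
// prevIndex and its own last index, plus the term it has at that hint (0 if
// it cannot read one).
func (l *followerLog) matchesPrev(prevIndex, prevTerm uint64) (ok bool, hint, hintTerm uint64) {
	if t, found := l.terms[prevIndex]; found && t == prevTerm {
		return true, 0, 0
	}
	hint = l.lastIndex()
	if prevIndex < hint {
		hint = prevIndex
	}
	hintTerm = l.terms[hint] // zero value if absent
	return false, hint, hintTerm
}

func main() {
	// The follower only has indexes 10 and 11, so a MsgApp whose entries
	// follow index 5 must be rejected.
	f := &followerLog{terms: map[uint64]uint64{10: 3, 11: 3}}
	ok, hint, hintTerm := f.matchesPrev(5, 2)
	fmt.Printf("ok=%v rejectHint=%d logTerm=%d\n", ok, hint, hintTerm)
}
```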
The logic here also involves silently swallowing an error when attempting to read the term from storage (although we can't have hit this on the follower, because that would return a 0 term hint to the leader, who would then not call `findConflictByTerm()`):

https://github.com/etcd-io/raft/blob/1df762940b8c309a27cfafb086d767c0c7e3f58f/log.go#L180-L187
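The pattern in question, reduced to a standalone sketch with illustrative names: a storage error during the term lookup is dropped and reported as term 0, which the receiver then cannot distinguish from a genuine zero term:

```go
package main

import (
	"errors"
	"fmt"
)

var errCompacted = errors.New("requested index is unavailable due to compaction")

// storage is a minimal stand-in for raft.Storage's term lookup.
type storage interface {
	Term(index uint64) (uint64, error)
}

// zeroTermOnError swallows storage errors and reports term 0, so callers
// cannot tell "term is 0" apart from "term could not be read".
func zeroTermOnError(s storage, index uint64) uint64 {
	t, err := s.Term(index)
	if err != nil {
		return 0 // the error is silently dropped here
	}
	return t
}

// mapStorage models a log with a missing prefix.
type mapStorage map[uint64]uint64

func (m mapStorage) Term(i uint64) (uint64, error) {
	if t, ok := m[i]; ok {
		return t, nil
	}
	return 0, errCompacted
}

func main() {
	s := mapStorage{10: 3}
	fmt.Println(zeroTermOnError(s, 10)) // 3
	fmt.Println(zeroTermOnError(s, 5))  // 0: the leader sees no usable hint
}
```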
It seems plausible that the data loss induced by `kv.raft_log.synchronization.unsafe.disabled` somehow ended up either in an append loop or hitting a slow path (e.g. there is a fallback here to probing indexes one by one, although it does not seem like we hit it here), where the leader continually sends MsgApps to the follower, who in turn rejects them.

We should make sure the behavior here is sound, and improve observability when this happens.
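One hypothetical way to narrow the observability gap (a sketch, not a concrete proposal from this issue): track consecutive MsgApp rejections per follower and warn once they cross a threshold, which would have surfaced the append/reject loop in this incident:

```go
package main

import "fmt"

// rejectWarnThreshold is an arbitrary illustrative value.
const rejectWarnThreshold = 100

// followerProgress tracks per-follower append state; the field here is
// hypothetical, not part of CockroachDB or etcd-io/raft.
type followerProgress struct {
	consecutiveRejects int
}

// onMsgAppResp resets the streak on success and warns once the follower has
// rejected enough consecutive MsgApps to suggest an append/reject loop.
func (p *followerProgress) onMsgAppResp(rejected bool, followerID uint64) {
	if !rejected {
		p.consecutiveRejects = 0
		return
	}
	p.consecutiveRejects++
	if p.consecutiveRejects == rejectWarnThreshold {
		fmt.Printf("follower n%d rejected %d consecutive MsgApps; possible append loop\n",
			followerID, p.consecutiveRejects)
	}
}

func main() {
	var p followerProgress
	for i := 0; i < rejectWarnThreshold; i++ {
		p.onMsgAppResp(true, 6) // n6 keeps rejecting, as in this incident
	}
}
```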
Jira issue: CRDB-32732
Epic CRDB-39898