sqlstats: PersistedSQLStats.Stop() blocking server drain #102574
Labels
A-sql-observability: Related to observability of the SQL layer
branch-master: Failures and bugs on the master branch.
branch-release-23.1: Used to mark GA and release blockers, technical advisories, and bugs for 23.1
C-bug: Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.
GA-blocker
O-testcluster: Issues found or occurred on a test cluster, i.e. a long-running internal cluster
Comments
erikgrinaker added the C-bug, A-sql-observability, and O-testcluster labels on Apr 28, 2023

Seen on a 23.1 test cluster, drain stalls for several minutes until it times out:

Goroutine dumps show it blocked here:

The fact that we've been waiting 2815 minutes to acquire a mutex in …

Other goroutines in …

Full goroutine dump: goroutines.txt

Jira issue: CRDB-27539
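The stall reported above matches a common Go deadlock shape: a goroutine holds a mutex while blocked on an unbuffered channel send whose reader has already exited, so a `Stop()`/drain path that needs the same mutex waits forever. Below is a minimal, self-contained sketch of that failure mode under hypothetical names; it is not the actual CockroachDB code, only an illustration of the pattern.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// flusher mimics a stats-flush loop that notifies a listener after each
// flush. All names here are hypothetical stand-ins for the sketch.
type flusher struct {
	mu      sync.Mutex
	flushed chan struct{} // unbuffered: a send blocks until someone receives
}

func (f *flusher) flushOnce() {
	f.mu.Lock()
	defer f.mu.Unlock()
	// ... perform the flush ...
	f.flushed <- struct{}{} // blocks forever if the reader already exited
}

// Stop needs the same mutex, so it blocks behind flushOnce indefinitely,
// which is what a drain waiting on Stop() would observe.
func (f *flusher) Stop() {
	f.mu.Lock()
	defer f.mu.Unlock()
}

func main() {
	f := &flusher{flushed: make(chan struct{})}

	// The goroutine that was supposed to read from f.flushed has already
	// exited (e.g. because of a server drain), so nobody ever receives.
	go f.flushOnce()
	time.Sleep(100 * time.Millisecond)

	done := make(chan struct{})
	go func() {
		f.Stop()
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("stopped cleanly")
	case <-time.After(time.Second):
		fmt.Println("Stop() is stuck: the flusher holds the mutex while blocked on the channel send")
	}
}
```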
knz added the branch-master, GA-blocker, branch-release-23.1, and branch-release-23.1.0 labels on Apr 28, 2023
knz added a commit to knz/cockroach that referenced this issue on Apr 28, 2023:
Prior to this change, the coordination between the stats flusher task (an async stopper task) and the activity flusher job was performed using a two-step process:

- the stats persistence task offered to call a callback _function_ every time a flush would complete.
- the job would _reconfigure the callback function_ on each iteration.
- the function was writing to a channel that was subsequently read by the job iteration body.

This approach was defective in 3 ways:

1. If the job iteration body would exit (e.g. due to a server drain) *after* it installed the callback fn, but *before* the stats flusher would read and call the callback fn, a window of time existed where a deadlock could occur:
   - the stats flusher retrieves the pointer to the caller fn but doesn't call it yet.
   - the job loop exits. From then on it will not read from the channel any more.
   - the stats flusher attempts to write to the channel. A deadlock occurs. (This was seen during testing. See cockroachdb#102574)

   The fix here is to always jointly `select` the write to the channel and also a read from the drain/stopper signals, to abort the channel operation if a shutdown is requested.

2. The stats flusher task was holding the mutex locked while performing the channel write. This is generally bad code hygiene as it forces the code maintainer to double-check whether the lock and channel operations don't mutually interlock.

   The fix is to use the mutex to retrieve the channel reference, and then write to the channel while the mutex is not held any more.

3. The API between the two was defining a *callback function* where really just a notification channel was needed. The fix here is to simplify the API.

Release note: None
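As a rough illustration of the corrected pattern this message describes, here is a sketch under assumed names; the drain signal is modeled as a plain quiesce channel rather than CockroachDB's `*stop.Stopper`, and none of the identifiers below are the real ones.

```go
package flushnotify

import "sync"

// notifier sketches the corrected coordination described above.
type notifier struct {
	mu      sync.Mutex
	flushed chan struct{} // set by the consumer; may be nil
}

// setChannel is the simplified API: the consumer hands over a notification
// channel instead of reconfiguring a callback function on every iteration.
func (n *notifier) setChannel(ch chan struct{}) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.flushed = ch
}

// notifyFlushDone applies both fixes from the commit message:
//  1. the mutex is only used to grab the channel reference; the send
//     happens after the lock is released, and
//  2. the send is select-ed together with the quiesce signal, so a
//     shutdown aborts the channel operation instead of deadlocking.
func (n *notifier) notifyFlushDone(quiesce <-chan struct{}) {
	n.mu.Lock()
	ch := n.flushed
	n.mu.Unlock()

	if ch == nil {
		return
	}
	select {
	case ch <- struct{}{}:
	case <-quiesce:
		// Shutdown requested: drop the notification rather than block.
	}
}
```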
craig bot pushed a commit that referenced this issue on Apr 28, 2023:
100181: kv: Use strict types for common fields r=erikgrinaker a=andrewbaptist

This PR introduces 3 new typed fields in mvcc.go: RaftTerm, RaftIndex and LeaseSequence. These fields were previously just uint64 throughout the code, which made the code harder to read and risked incorrect conversions.

Epic: none
Release note: None

102407: kvserver: check PROSCRIBED lease status over UNUSABLE r=erikgrinaker,tbg a=pavelkalinnikov

The PROSCRIBED lease status, just like EXPIRED, puts a lease into a definitely invalid state. The UNUSABLE state (when the request timestamp is in the stasis period) is less clear-cut: we still own the lease, but callers may use it or not depending on context. For example, the closed timestamp side-transport ignores the UNUSABLE state (because we still own the lease) and takes it as usable for its purposes.

Because of the order in which the checks were made, this led to a bug: a PROSCRIBED lease is reported as UNUSABLE during stasis periods, the closed timestamp side-transport then considers it usable, and updates closed timestamps when it shouldn't. This commit fixes the bug by swapping the order of checks in the leaseStatus method. The order now goes from "hard" checks like EXPIRED and PROSCRIBED, to "softer" UNUSABLE, and (when the softness is put to the limit) VALID.

Fixes #98698
Fixes #99931
Fixes #100101

Epic: none
Release note (bug fix): a bug is fixed in closed timestamp updates within its side-transport. Previously, during asymmetric partitions, a node that transfers a lease away, and misses a liveness heartbeat, could then erroneously update the closed timestamp during the stasis period of its liveness. This could lead to closed timestamp invariant violation, and node crashes; in extreme cases this could lead to inconsistencies in read-only queries.

102503: concurrency: do not partition locks in the lock table by span scope r=nvanbenschoten a=arulajmani

This patch is entirely a refactor and does not change any functionality. It is done in preparation for introducing `LockSpanSets` to track lock spans, which do not make a distinction between global and local keys (unlike `SpanSets`, which do). The main changes here are in `lockTableImpl`, which actually stores locks, and `lockTableGuardImpl`, which snapshots the lock table. We no longer make a distinction between locks on Local and Global keys when storing them. The majority of this diff is test file churn caused by the printing changes to the lock table.

Informs #102008
Release note: None

102590: sql,persistedsqlstats: prevent a deadlock during shutdown r=j82w a=knz

Fixes #102574. This is the change described in the commit message above: the stats flusher now `select`s the flush-completion channel write together with the drain/stopper signals, writes to the channel without holding the mutex, and exposes a notification channel instead of a callback function.

Release note: None

Co-authored-by: Andrew Baptist <[email protected]>
Co-authored-by: Pavel Kalinnikov <[email protected]>
Co-authored-by: Arul Ajmani <[email protected]>
Co-authored-by: Raphael 'kena' Poss <[email protected]>
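The ordering fix in 102407 is easiest to see in miniature. The sketch below uses made-up types and field names rather than the real kvserver leaseStatus code; it only shows why the "hard" states (EXPIRED, PROSCRIBED) must be evaluated before the "softer" UNUSABLE state.

```go
package leasecheck

// Status values mirror the states named in the PR description; the type and
// the helper fields below are hypothetical stand-ins, not the actual
// kvserver implementation.
type Status int

const (
	Expired Status = iota
	Proscribed
	Unusable
	Valid
)

type lease struct {
	expired    bool // expiration/liveness has lapsed
	proscribed bool // lease was explicitly revoked (e.g. transferred away)
	inStasis   bool // request timestamp falls in the stasis window
}

// status checks the "hard" invalid states (EXPIRED, PROSCRIBED) before the
// "softer" UNUSABLE state. With the opposite order, a proscribed lease whose
// request timestamp fell in the stasis window would be reported as UNUSABLE,
// which callers such as the closed timestamp side-transport treat as "still
// our lease" -- the bug described above.
func (l lease) status() Status {
	switch {
	case l.expired:
		return Expired
	case l.proscribed:
		return Proscribed
	case l.inStasis:
		return Unusable
	default:
		return Valid
	}
}
```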
blathers-crl bot pushed a commit that referenced this issue on Apr 28, 2023
blathers-crl bot pushed a commit that referenced this issue on Apr 28, 2023
cameronnunez pushed a commit to cameronnunez/cockroach that referenced this issue on May 2, 2023