storage: TestRaftRemoveRace failed under stress #16376

Closed
cockroach-teamcity opened this issue Jun 7, 2017 · 3 comments · Fixed by #16399 or #16413
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

Comments

@cockroach-teamcity
Member

SHA: https://github.com/cockroachdb/cockroach/commits/e1ec2fd44f7cd23c1835c61e20070d64d2bf7f9c

Parameters:

TAGS=
GOFLAGS=-race

Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=266258&tab=buildLog

I170607 07:20:53.305339 39662 gossip/gossip.go:297  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42531" > attrs:<> locality:<> 
W170607 07:20:53.328098 39662 gossip/gossip.go:1196  [n2] no incoming or outgoing connections
I170607 07:20:53.332276 39818 gossip/client.go:131  [n2] started gossip client to 127.0.0.1:42531
I170607 07:20:53.353997 39662 gossip/gossip.go:297  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43377" > attrs:<> locality:<> 
W170607 07:20:53.370251 39662 gossip/gossip.go:1196  [n3] no incoming or outgoing connections
I170607 07:20:53.371951 39840 gossip/client.go:131  [n3] started gossip client to 127.0.0.1:42531
I170607 07:20:53.427217 39662 gossip/gossip.go:297  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:45311" > attrs:<> locality:<> 
W170607 07:20:53.499241 39662 gossip/gossip.go:1196  [n4] no incoming or outgoing connections
I170607 07:20:53.515956 39974 gossip/client.go:131  [n4] started gossip client to 127.0.0.1:42531
I170607 07:20:53.585405 39662 gossip/gossip.go:297  [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:44690" > attrs:<> locality:<> 
W170607 07:20:53.605730 39662 gossip/gossip.go:1196  [n5] no incoming or outgoing connections
I170607 07:20:53.612203 40245 gossip/client.go:131  [n5] started gossip client to 127.0.0.1:42531
I170607 07:20:53.613768 40249 gossip/server.go:285  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:45311})
I170607 07:20:53.621630 40245 gossip/client.go:136  [n5] closing client to node 1 (127.0.0.1:42531): received forward from node 1 to 3 (127.0.0.1:45311)
I170607 07:20:53.622051 40261 gossip/gossip.go:1210  [n5] node has connected to cluster via gossip
I170607 07:20:53.623023 40224 gossip/client.go:131  [n5] started gossip client to 127.0.0.1:45311
I170607 07:20:53.664186 39662 gossip/gossip.go:297  [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:32921" > attrs:<> locality:<> 
W170607 07:20:53.689761 39662 gossip/gossip.go:1196  [n6] no incoming or outgoing connections
I170607 07:20:53.693182 40371 gossip/client.go:131  [n6] started gossip client to 127.0.0.1:42531
I170607 07:20:53.693982 40389 gossip/server.go:285  [n1] refusing gossip from node 6 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:45311})
I170607 07:20:53.702066 39662 gossip/gossip.go:297  [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:33397" > attrs:<> locality:<> 
W170607 07:20:53.742558 39662 gossip/gossip.go:1196  [n7] no incoming or outgoing connections
I170607 07:20:53.746859 40371 gossip/client.go:136  [n6] closing client to node 1 (127.0.0.1:42531): received forward from node 1 to 3 (127.0.0.1:45311)
I170607 07:20:53.747350 40107 gossip/gossip.go:1210  [n6] node has connected to cluster via gossip
I170607 07:20:53.748050 40452 gossip/client.go:131  [n6] started gossip client to 127.0.0.1:45311
I170607 07:20:53.756747 39977 gossip/client.go:131  [n7] started gossip client to 127.0.0.1:42531
I170607 07:20:53.762139 40029 gossip/server.go:285  [n1] refusing gossip from node 7 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:44690})
I170607 07:20:53.774841 39977 gossip/client.go:136  [n7] closing client to node 1 (127.0.0.1:42531): received forward from node 1 to 4 (127.0.0.1:44690)
I170607 07:20:53.775243 40523 gossip/gossip.go:1210  [n7] node has connected to cluster via gossip
I170607 07:20:53.777074 39982 gossip/client.go:131  [n7] started gossip client to 127.0.0.1:44690
I170607 07:20:53.814305 39662 gossip/gossip.go:297  [n7] NodeDescriptor set to node_id:7 address:<network_field:"tcp" address_field:"127.0.0.1:36253" > attrs:<> locality:<> 
W170607 07:20:53.879576 39662 gossip/gossip.go:1196  [n8] no incoming or outgoing connections
I170607 07:20:53.883370 40503 gossip/client.go:131  [n8] started gossip client to 127.0.0.1:42531
I170607 07:20:53.892640 40586 gossip/server.go:285  [n1] refusing gossip from node 8 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:45311})
I170607 07:20:53.898531 40503 gossip/client.go:136  [n8] closing client to node 1 (127.0.0.1:42531): received forward from node 1 to 3 (127.0.0.1:45311)
I170607 07:20:53.898969 40126 gossip/gossip.go:1210  [n8] node has connected to cluster via gossip
I170607 07:20:53.900751 40730 gossip/client.go:131  [n8] started gossip client to 127.0.0.1:45311
I170607 07:20:53.927243 39662 gossip/gossip.go:297  [n8] NodeDescriptor set to node_id:8 address:<network_field:"tcp" address_field:"127.0.0.1:38514" > attrs:<> locality:<> 
W170607 07:20:53.961406 39662 gossip/gossip.go:1196  [n9] no incoming or outgoing connections
I170607 07:20:53.963285 40857 gossip/client.go:131  [n9] started gossip client to 127.0.0.1:42531
I170607 07:20:53.972278 40867 gossip/server.go:285  [n1] refusing gossip from node 9 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:45311})
I170607 07:20:54.002530 40857 gossip/client.go:136  [n9] closing client to node 1 (127.0.0.1:42531): received forward from node 1 to 3 (127.0.0.1:45311)
I170607 07:20:54.013651 39662 gossip/gossip.go:297  [n9] NodeDescriptor set to node_id:9 address:<network_field:"tcp" address_field:"127.0.0.1:40665" > attrs:<> locality:<> 
I170607 07:20:54.015688 40682 gossip/gossip.go:1210  [n9] node has connected to cluster via gossip
I170607 07:20:54.016427 40935 gossip/client.go:131  [n9] started gossip client to 127.0.0.1:45311
I170607 07:20:54.016993 40671 gossip/server.go:285  [n3] refusing gossip from node 9 (max 3 conns); forwarding to 6 ({tcp 127.0.0.1:33397})
I170607 07:20:54.049760 40935 gossip/client.go:136  [n9] closing client to node 3 (127.0.0.1:45311): received forward from node 3 to 6 (127.0.0.1:33397)
I170607 07:20:54.050913 41028 gossip/client.go:131  [n9] started gossip client to 127.0.0.1:33397
W170607 07:20:54.051017 39662 gossip/gossip.go:1196  [n10] no incoming or outgoing connections
I170607 07:20:54.063431 41013 gossip/client.go:131  [n10] started gossip client to 127.0.0.1:42531
I170607 07:20:54.065104 41017 gossip/server.go:285  [n1] refusing gossip from node 10 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:44690})
I170607 07:20:54.072748 39662 storage/store.go:1269  [n10,s10]: failed initial metrics computation: [n10,s10]: system config not yet available
I170607 07:20:54.073039 39662 gossip/gossip.go:297  [n10] NodeDescriptor set to node_id:10 address:<network_field:"tcp" address_field:"127.0.0.1:59484" > attrs:<> locality:<> 
I170607 07:20:54.081298 41013 gossip/client.go:136  [n10] closing client to node 1 (127.0.0.1:42531): received forward from node 1 to 4 (127.0.0.1:44690)
I170607 07:20:54.082887 41045 gossip/gossip.go:1210  [n10] node has connected to cluster via gossip
I170607 07:20:54.096092 41128 gossip/client.go:131  [n10] started gossip client to 127.0.0.1:44690
I170607 07:20:54.155509 41202 storage/replica_raftstorage.go:442  [s1,r1/1:/M{in-ax}] generated preemptive snapshot 9d199885 at index 24
I170607 07:20:54.213826 41202 storage/store.go:3371  [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 14, rate-limit: 2.0 MiB/sec, 11ms
I170607 07:20:54.216678 41221 storage/replica_raftstorage.go:639  [s2,r1/?:{-}] applying preemptive snapshot at index 24 (id=9d199885, encoded size=10002, 1 rocksdb batches, 14 log entries)
I170607 07:20:54.248462 41221 storage/replica_raftstorage.go:647  [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 32ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170607 07:20:54.251095 41202 storage/replica_command.go:3599  [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2]
I170607 07:20:54.262873 41241 storage/replica.go:2865  [s1,r1/1:/M{in-ax}] proposing ADD_REPLICA (n2,s2):2: [(n1,s1):1 (n2,s2):2]
I170607 07:20:54.332837 41288 storage/replica_raftstorage.go:442  [s1,r1/1:/M{in-ax}] generated preemptive snapshot af1b4c87 at index 27
I170607 07:20:54.340807 41288 storage/store.go:3371  [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 46, log entries: 17, rate-limit: 2.0 MiB/sec, 6ms
I170607 07:20:54.352764 41246 storage/replica_raftstorage.go:639  [s3,r1/?:{-}] applying preemptive snapshot at index 27 (id=af1b4c87, encoded size=12059, 1 rocksdb batches, 17 log entries)
I170607 07:20:54.355019 41246 storage/replica_raftstorage.go:647  [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170607 07:20:54.357539 41288 storage/replica_command.go:3599  [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3]
W170607 07:20:54.397151 39768 vendor/github.com/coreos/etcd/raft/raft.go:793  [s1,r1/1:/M{in-ax}] 1 stepped down to follower since quorum is not active
I170607 07:20:54.397267 41334 storage/raft_transport.go:436  raft transport stream to node 1 established
I170607 07:20:54.429741 41277 util/stop/stopper.go:505  quiescing; tasks left:
1      storage/client_test.go:514
W170607 07:20:54.430131 41303 storage/replica.go:2529  [hb,s1,r1/1:/M{in-ax}] shutdown cancellation after 0.0s of attempting command [txn: e14ffaf8], BeginTransaction [/System/NodeLiveness/5,/Min), ConditionalPut [/System/NodeLiveness/5,/Min), EndTransaction [/System/NodeLiveness/5,/Min)
I170607 07:20:54.430944 40352 storage/node_liveness.go:352  [hb] heartbeat result is ambiguous (server shutdown); retrying
W170607 07:20:54.431497 40207 storage/node_liveness.go:253  [hb] failed node liveness heartbeat: node unavailable; try another peer
W170607 07:20:54.432170 40486 storage/node_liveness.go:253  [hb] failed node liveness heartbeat: node unavailable; try another peer
W170607 07:20:54.432786 40352 storage/node_liveness.go:253  [hb] failed node liveness heartbeat: node unavailable; try another peer
I170607 07:20:54.671039 40850 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:40665->127.0.0.1:42038: use of closed network connection
I170607 07:20:54.674594 40091 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:32921->127.0.0.1:41718: use of closed network connection
I170607 07:20:54.675452 40582 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:38514->127.0.0.1:34498: use of closed network connection
I170607 07:20:54.675821 40094 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken EOF.
W170607 07:20:54.680040 40126 gossip/gossip.go:1196  [n8] no incoming or outgoing connections
I170607 07:20:54.681228 40693 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I170607 07:20:54.684733 39695 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:42531->127.0.0.1:42971: use of closed network connection
I170607 07:20:54.684964 41351 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken read tcp 127.0.0.1:41727->127.0.0.1:32921: read: connection reset by peer.
I170607 07:20:54.691009 40097 vendor/google.golang.org/grpc/clientconn.go:855  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:32921: getsockopt: connection refused"; Reconnecting to {127.0.0.1:32921 <nil>}
I170607 07:20:54.691312 40516 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I170607 07:20:54.699525 39862 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:43377->127.0.0.1:59844: use of closed network connection
I170607 07:20:54.699728 40817 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:59484->127.0.0.1:37273: use of closed network connection
I170607 07:20:54.700000 40366 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:33397->127.0.0.1:59348: use of closed network connection
I170607 07:20:54.700131 40231 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken EOF.
W170607 07:20:54.700232 41272 storage/raft_transport.go:442  raft transport stream to node 2 failed: EOF
I170607 07:20:54.700403 39837 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:45311->127.0.0.1:39836: use of closed network connection
I170607 07:20:54.700546 40114 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:44690->127.0.0.1:46009: use of closed network connection
I170607 07:20:54.700710 39822 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I170607 07:20:54.700805 40241 vendor/google.golang.org/grpc/transport/http2_server.go:392  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:36253->127.0.0.1:45450: use of closed network connection
I170607 07:20:54.710170 39965 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I170607 07:20:54.710423 39865 vendor/google.golang.org/grpc/transport/http2_client.go:1283  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I170607 07:20:54.712343 39825 vendor/google.golang.org/grpc/clientconn.go:855  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:44690: getsockopt: connection refused"; Reconnecting to {127.0.0.1:44690 <nil>}
I170607 07:20:54.713178 39868 vendor/google.golang.org/grpc/clientconn.go:855  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:43377: getsockopt: connection refused"; Reconnecting to {127.0.0.1:43377 <nil>}
I170607 07:20:54.714035 40519 vendor/google.golang.org/grpc/clientconn.go:855  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:36253: getsockopt: connection refused"; Reconnecting to {127.0.0.1:36253 <nil>}
I170607 07:20:54.715186 39968 vendor/google.golang.org/grpc/clientconn.go:855  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:45311: getsockopt: connection refused"; Reconnecting to {127.0.0.1:45311 <nil>}
I170607 07:20:54.715723 40234 vendor/google.golang.org/grpc/clientconn.go:855  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:33397: operation was canceled"; Reconnecting to {127.0.0.1:33397 <nil>}
	client_test.go:1031: change replicas of r1 failed: quota pool no longer in use
@cockroach-teamcity cockroach-teamcity added O-robot Originated from a bot. C-test-failure Broken test (automatically or manually discovered). labels Jun 7, 2017
irfansharif added a commit to irfansharif/cockroach that referenced this issue Jun 7, 2017
Fixes cockroachdb#16376.

Under stress TestRaftRemoveRace failed with:

  --- FAIL: TestRaftRemoveRace (1.45s)
  client_test.go:1031: change replicas of r1 failed: quota pool no
  longer in use

Consider the following:
- 'add replica' commands get queued up on the replicate queue
- the leader replica steps down as leader due to timeouts, thus closing the
  quota pool
- commands come off the queue, cannot acquire quota because the quota
  pool is closed, and fail with an error indicating so

TestRaftRemoveRace fails as it expects all replica additions to go
through without failure.

To reproduce this, the following minimal test (run under stress) is
sufficient; we lower RaftTickInterval and RaftElectionTimeoutTicks
to make it more likely that leadership changes take place.

  func TestReplicateQueueQuota(t *testing.T) {
      defer leaktest.AfterTest(t)()
      sc := storage.TestStoreConfig(nil)
      sc.RaftElectionTimeoutTicks = 2             // Default: 15
      sc.RaftTickInterval = 10 * time.Millisecond // Default: 200ms
      mtc := &multiTestContext{storeConfig: &sc}
      defer mtc.Stop()
      mtc.Start(t, 3)

      const rangeID = roachpb.RangeID(1)
      mtc.replicateRange(rangeID, 1, 2)

      for i := 0; i < 10; i++ {
          mtc.unreplicateRange(rangeID, 2)
          mtc.replicateRange(rangeID, 2)
      }
  }

The earlier version of TestRaftRemoveRace was written to reproduce the
failure seen in cockroachdb#9037; it was agnostic to raft leadership changes.
@irfansharif irfansharif reopened this Jun 8, 2017
@irfansharif
Contributor

irfansharif commented Jun 8, 2017

If I've understood correctly, this touches the short window of time where it's
possible that the lease holder and the raft leader are not the same replica
(raft leadership changes from under us, but the lease holder is still the old
replica for a short while). An assumption baked into the quota pool was that
the lease holder and the raft leader are co-located.

  1. The lease holder and the raft leader are co-located
  2. 'add replica' commands get queued up on the replicate queue
  3. The leader replica steps down as leader due to timeouts, thus closing the quota
     pool (this happens on the lease holder, because they're one and the same)
  4. Commands come off the queue, cannot acquire quota because the quota
     pool is closed (on the lease holder), and fail with an error indicating so

Re-opening while I think about this some more. ~~At first glance it
seems moving away from leadership-change-based pool initialization/destruction
to an entirely lease-holder-transition-based one is more appropriate; it would also
address some of the impedance mismatch of jumping between the two.~~
Original reasoning for resetting the quota pool on leadership transitions:

// Raft may propose commands itself (specifically the empty commands when
// leadership changes), and these commands don't go through the code paths where
// we acquire quota from the pool. To offset this we reset the quota pool whenever
// leadership changes hands.
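
A minimal Go sketch of this failure mode (hypothetical names, not the actual
storage quota pool implementation): close() is invoked when the replica loses
raft leadership, and any acquisition still queued on the lease holder then
fails with the same error seen in the test output.

  package main

  import (
      "errors"
      "fmt"
  )

  // errPoolClosed mirrors the "quota pool no longer in use" error from the log.
  var errPoolClosed = errors.New("quota pool no longer in use")

  // quotaPool is a toy stand-in for the real pool: a buffered channel of
  // quota units plus a done channel closed on raft leadership loss.
  type quotaPool struct {
      quota chan struct{}
      done  chan struct{}
  }

  func newQuotaPool(units int) *quotaPool {
      qp := &quotaPool{quota: make(chan struct{}, units), done: make(chan struct{})}
      for i := 0; i < units; i++ {
          qp.quota <- struct{}{}
      }
      return qp
  }

  // close is called when the replica steps down as raft leader; it wakes up
  // every goroutine blocked in acquire.
  func (qp *quotaPool) close() { close(qp.done) }

  // acquire blocks until a quota unit is available. With the pre-fix behavior
  // a closed pool turns every pending acquisition into an error, which is
  // what the queued 'add replica' commands ran into.
  func (qp *quotaPool) acquire() error {
      select {
      case <-qp.quota:
          return nil
      case <-qp.done:
          return errPoolClosed
      }
  }

  func main() {
      qp := newQuotaPool(1)
      _ = qp.acquire()          // an in-flight command holds the only quota unit
      go qp.close()             // leadership is lost while another command waits
      fmt.Println(qp.acquire()) // prints: quota pool no longer in use
  }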

@irfansharif
Contributor

From #16399:

We require that raft commands are proposed on the lease holder, not the raft leader (the lease holder will forward the proposal to the leader when this occurs). That's what was happening in the test (the ChangeReplicas transaction was always sent to the lease holder). I think this test was uncovering an actual bug in the quota pool.

@bdarnell
Contributor

bdarnell commented Jun 8, 2017

If I've understood correctly, this touches the short window of time where it's
possible that the lease holder and the raft leader are not the same replica
(raft leadership changes from under us, but the lease holder is still the old
replica for a short while). An assumption baked into the quota pool was that
the lease holder and the raft leader are co-located.

Yes. As I said when the quota pool was introduced (#15802 (comment)), it's fine if the quota pool mechanism is effectively disabled when the leader and lease holder are not the same. We want to "fail open" in this case instead of returning an error.

irfansharif added a commit to irfansharif/cockroach that referenced this issue Jun 8, 2017
(actually) Fixes cockroachdb#16376, reverts cockroachdb#16399.

TestRaftRemoveRace touched the short window of time where it was
possible that the lease holder and the raft leader were not the same
replica (raft leadership could change from under us, but the lease
holder stayed steady).

Consider the following sequence of events:
- the lease holder and the raft leader are co-located
- 'add replica' commands get queued up on the replicate queue
- the leader replica steps down as leader, thus closing the quota pool (on
  the lease holder, because they're one and the same)
- commands come off the queue, cannot acquire quota because the quota pool is
  closed (on the lease holder), and fail with an error indicating so

We make two observations:
- quotaPool.close() only takes place when a raft leader is becoming a
follower, thus causing all ongoing acquisitions to fail
- ongoing acquisitions only take place on the lease holder replica

The quota pool was implemented in a manner such that it is effectively
disabled when the lease holder and the raft leader are not co-located.
Failing with an error here (now that the raft leader has changed, the
lease holder and raft leader are no longer co-located) runs contrary to this.
What we really want is to "fail open" in this case instead, i.e. allow the
acquisition to proceed as if the quota pool were effectively disabled.
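
A minimal sketch of the fix, reusing the hypothetical quotaPool from the
sketch further up (again, not the actual implementation): once the pool has
been closed because raft leadership moved elsewhere, acquire succeeds
immediately instead of surfacing an error, so the pool is simply a no-op
while the lease holder and the raft leader are not co-located.

  // acquire now "fails open": once done is closed because raft leadership
  // moved to another replica, quota accounting is meaningless here, so the
  // proposal is allowed through instead of erroring out.
  func (qp *quotaPool) acquire() error {
      select {
      case <-qp.quota:
          return nil
      case <-qp.done:
          return nil // fail open: behave as if the pool were disabled
      }
  }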
irfansharif added a commit to irfansharif/cockroach that referenced this issue Jun 9, 2017