100598: upgrade/upgrades: fix TestUpgradeSchemaChangerElements race condition r=fqazi a=fqazi

Previously, TestUpgradeSchemaChangerElements started a schema change
and intentionally caused it to be paused. It then resumed this job
and started a version upgrade. The version upgrade being tested
intentionally pauses schema change jobs, so the job could be paused
again before it had actually resumed. This patch adds one more
channel to ensure synchronization in this sequence.

Additionally, we further limit the range of versions involved to make
this test a bit shorter, so that it focuses only on the upgrade for
schema changes.
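
A minimal, self-contained sketch of the synchronization pattern this patch adds (the real change wires an extra channel through the declarative schema changer's testing knobs, as shown in the diff below; the names here are illustrative only):

```go
package main

import "fmt"

func main() {
	// jobStarted mirrors the extra channel added by this patch: the resumed
	// schema-change job signals on it once it is executing again, and the
	// test only bumps the cluster version after receiving that signal, so
	// the upgrade cannot pause the job a second time while it is resuming.
	jobStarted := make(chan struct{})
	done := make(chan struct{})

	// Stand-in for the resumed schema-change job.
	go func() {
		// ... the job resumes and reaches its execution phase ...
		jobStarted <- struct{}{}
	}()

	// Stand-in for the test goroutine driving the version upgrade.
	go func() {
		<-jobStarted // wait until the job is actually running again
		fmt.Println("starting version upgrade")
		close(done)
	}()

	<-done
}
```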

Fixes: #98602

Release note: None

100675: importer: fix flaky TestImportIntoCSV r=yuzefovich a=yuzefovich

This commit fixes an oversight of 117f712 that made TestImportIntoCSV flaky. In particular, that commit made it so that if the import fails but is actually paused, the corresponding import job is resumed. This is needed in order to bring the table back online before it can be safely dropped. However, resuming the job might fail if the job still has the "pause-requested" status, so this commit adds a waiting mechanism that only proceeds once the job has transitioned out of this status. Additionally, that resume can hang (or take a very long time), so this commit switches to canceling the job instead.
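
A sketch of the wait-then-cancel approach as a hypothetical helper (the name waitThenCancelImportJob is mine; the real code lives inline in dropTableAfterJobComplete in the diff below, using the same test utilities):

```go
package importer_test

import (
	"testing"

	"github.com/cockroachdb/cockroach/pkg/jobs"
	"github.com/cockroachdb/cockroach/pkg/testutils"
	"github.com/cockroachdb/cockroach/pkg/testutils/sqlutils"
	"github.com/cockroachdb/errors"
)

// waitThenCancelImportJob waits until the import job has left the
// "pause-requested" state and then cancels it. jobID is assumed to have been
// looked up from SHOW JOBS beforehand.
func waitThenCancelImportJob(t *testing.T, sqlDB *sqlutils.SQLRunner, jobID int) {
	// The job cannot be canceled while it still has the "pause-requested"
	// status, so poll until it transitions out of that status.
	testutils.SucceedsSoon(t, func() error {
		var status string
		sqlDB.QueryRow(t, "SELECT status FROM [SHOW JOBS] WHERE job_id = $1", jobID).Scan(&status)
		if status == string(jobs.StatusPauseRequested) {
			return errors.New("job still has pause-requested status")
		}
		return nil
	})
	// Cancel rather than resume: resuming can hang or take a very long time.
	sqlDB.Exec(t, "CANCEL JOB $1", jobID)
	sqlDB.Exec(t, "SHOW JOB WHEN COMPLETE $1", jobID)
}
```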

Fixes: #100366.
Fixes: #100477.

Release note: None

100761: flowinfra: fix a rare bug that could make drain be stuck forever r=yuzefovich a=yuzefovich

This commit fixes a long-standing bug around the flow registry that
could make the drain loop be stuck forever. The drain process works by
draining several components in a loop until each component reports that
there was no remaining work when the drain iteration was initiated.
One of the components to be drained is the flow registry: namely, we
want to make sure that there are no more remote flows present on the
node. We track that by having a map from the `FlowID` to the `flowEntry`
object.

Previously, it was possible for a `flowEntry` to become "stale" and
remain in the map forever. In particular, this was the case when
- `ConnectInboundStream` was called before the flow was scheduled
- the gRPC "handshake" failed in `ConnectInboundStream` (most likely due
to a network fluke)
- the flow never arrived (perhaps it was canceled before
`Flow.StartInternal` was called), or it arrived too late, when the
registry was already marked as "draining".

In such a scenario we would create a `flowEntry` with a ref count of
zero and add it to the map in `ConnectInboundStream`, but no one would
ever remove it. This commit fixes this oversight by adjusting the ref
counting logic a bit so that we always hold a reference throughout (and
only until the end of) `ConnectInboundStream`.
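
A self-contained sketch of the ref-counting rule this commit enforces (types and names are illustrative, not the real flowinfra API; the actual change is in the diff to flow_registry.go below):

```go
package main

import (
	"fmt"
	"sync"
)

type entry struct{ refCount int }

type registry struct {
	mu      sync.Mutex
	entries map[string]*entry
}

// acquire creates the entry if needed and takes a reference on it.
func (r *registry) acquire(id string) *entry {
	r.mu.Lock()
	defer r.mu.Unlock()
	e, ok := r.entries[id]
	if !ok {
		e = &entry{}
		r.entries[id] = e
	}
	e.refCount++
	return e
}

// release drops a reference and deletes the entry once no one holds it.
func (r *registry) release(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	e := r.entries[id]
	e.refCount--
	if e.refCount == 0 {
		delete(r.entries, id)
	}
}

// connectInboundStream holds a reference for its whole duration, so an entry
// it created for a flow that never shows up is removed on exit rather than
// lingering in the map forever (which would make Drain wait indefinitely).
func (r *registry) connectInboundStream(id string) error {
	r.acquire(id)
	defer r.release(id) // always released, even on handshake failure
	// ... handshake with the producer; on error the deferred release
	// removes the entry we just created ...
	return fmt.Errorf("injected handshake error")
}

func main() {
	r := &registry{entries: map[string]*entry{}}
	_ = r.connectInboundStream("flow-1")
	fmt.Println("remaining entries:", len(r.entries)) // 0: no stale entry
}
```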

Fixes: #100710.

Release note (bug fix): A rare bug in the shutdown of distributed plans
has been fixed that could previously make the graceful drain of
cockroach nodes retry forever. The bug has been present since before
22.1. The drain process is affected by this bug if the logs contain
messages like `drain details: distSQL execution flows:` with a non-zero
number of flows that does not decrease over a long period of time.

100768: build: append `-dirty` suffix if relevant r=rail a=rickystewart

This command and the `rev-parse HEAD` command behave identically, except that if the workspace is dirty, this command appends a `-dirty` suffix.

Epic: none
Release note: None

Co-authored-by: Faizan Qazi <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Ricky Stewart <[email protected]>
4 people committed Apr 6, 2023
5 parents 659b90b + b8ffd10 + edfe741 + 8d939df + 0f5d28f commit b02b1b5
Showing 7 changed files with 107 additions and 34 deletions.
2 changes: 1 addition & 1 deletion build/bazelutil/stamp.sh
@@ -56,7 +56,7 @@ else
CRASH_REPORT_ENV="development"
fi

BUILD_REV=$(git rev-parse HEAD)
BUILD_REV=$(git describe --match="" --always --abbrev=40 --dirty)
BUILD_UTCTIME=$(date -u '+%Y/%m/%d %H:%M:%S')


1 change: 1 addition & 0 deletions pkg/sql/flowinfra/BUILD.bazel
@@ -120,6 +120,7 @@ go_test(
"//pkg/util/timeutil",
"//pkg/util/uuid",
"@com_github_cockroachdb_errors//:errors",
"@com_github_cockroachdb_redact//:redact",
"@com_github_stretchr_testify//require",
],
)
26 changes: 14 additions & 12 deletions pkg/sql/flowinfra/flow_registry.go
@@ -401,7 +401,6 @@ func (fr *FlowRegistry) waitForFlow(
waitCh = make(chan struct{})
entry.waitCh = waitCh
}
entry.refCount++
fr.Unlock()

select {
@@ -411,7 +410,6 @@
}

fr.Lock()
fr.releaseEntryLocked(id)
return entry.flow
}

@@ -421,7 +419,7 @@
// expectedConnectionTime so that any flows that were registered at the end of
// the time window have a reasonable amount of time to connect to their
// consumers, thus unblocking them. All flows that are still running at this
// point are canceled if cancelStillRunning is true.
// point are canceled.
//
// The FlowRegistry rejects any new flows once it has finished draining.
//
@@ -524,8 +522,8 @@ func (fr *FlowRegistry) Drain(
allFlowsDone <- struct{}{}
}

// Undrain causes the FlowRegistry to start accepting flows again.
func (fr *FlowRegistry) Undrain() {
// undrain causes the FlowRegistry to start accepting flows again.
func (fr *FlowRegistry) undrain() {
fr.Lock()
fr.draining = false
fr.Unlock()
@@ -556,7 +554,18 @@ func (fr *FlowRegistry) ConnectInboundStream(
fr.Lock()
entry := fr.getEntryLocked(flowID)
flow := entry.flow
// Take a reference that is always released at the end of this method. In
// the happy case (when we end up returning a flow that we connected to),
// that flow also took reference in RegisterFlow, so the ref count won't go
// below one; in all other cases we want to make sure to delete the entry
// from the registry if we're holding the only reference.
entry.refCount++
fr.Unlock()
defer func() {
fr.Lock()
defer fr.Unlock()
fr.releaseEntryLocked(flowID)
}()
if flow == nil {
// Send the handshake message informing the producer that the consumer has
// not been scheduled yet. Another handshake will be sent below once the
@@ -570,13 +579,6 @@
MinAcceptedVersion: execinfra.MinAcceptedVersion,
},
}); err != nil {
// TODO(andrei): We failed to send a message to the producer; we'll return
// an error and leave this stream with connected == false so it times out
// later. We could call finishInboundStreamLocked() now so that the flow
// doesn't wait for the timeout and we could remember the error for the
// consumer if the consumer comes later, but I'm not sure what the best
// way to do that is. Similarly for the 2nd handshake message below,
// except there we already have the consumer and we can push the error.
return nil, nil, nil, err
}
flow = fr.waitForFlow(ctx, flowID, timeout)
58 changes: 53 additions & 5 deletions pkg/sql/flowinfra/flow_registry_test.go
@@ -26,9 +26,11 @@ import (
"github.com/cockroachdb/cockroach/pkg/testutils/distsqlutils"
"github.com/cockroachdb/cockroach/pkg/util/cancelchecker"
"github.com/cockroachdb/cockroach/pkg/util/leaktest"
"github.com/cockroachdb/cockroach/pkg/util/log"
"github.com/cockroachdb/cockroach/pkg/util/timeutil"
"github.com/cockroachdb/cockroach/pkg/util/uuid"
"github.com/cockroachdb/errors"
"github.com/cockroachdb/redact"
)

// lookupFlow returns the registered flow with the given ID. If no such flow is
@@ -367,6 +369,7 @@ func TestHandshake(t *testing.T) {
// subtests for more details.
func TestFlowRegistryDrain(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

ctx := context.Background()
reg := NewFlowRegistry()
@@ -396,15 +399,15 @@ func TestFlowRegistryDrain(t *testing.T) {
time.Sleep(time.Microsecond)
reg.UnregisterFlow(id)
<-drainDone
reg.Undrain()
reg.undrain()
})

// DrainTimeout verifies that Drain returns once the timeout expires.
t.Run("DrainTimeout", func(t *testing.T) {
registerFlow(t, id)
reg.Drain(0 /* flowDrainWait */, 0 /* minFlowDrainWait */, nil /* reporter */)
reg.UnregisterFlow(id)
reg.Undrain()
reg.undrain()
})

// AcceptNewFlow verifies that a FlowRegistry continues accepting flows
Expand Down Expand Up @@ -434,7 +437,7 @@ func TestFlowRegistryDrain(t *testing.T) {
); !testutils.IsError(err, "draining") {
t.Fatalf("unexpected error: %v", err)
}
reg.Undrain()
reg.undrain()
})

// MinFlowWait verifies that the FlowRegistry waits a minimum amount of time
@@ -464,7 +467,7 @@
}
reg.UnregisterFlow(id)
<-drainDone
reg.Undrain()
reg.undrain()

// Case in which a running flow finishes before the minimum wait time. We
// attempt to register another flow after the completion of the first flow
@@ -504,7 +507,52 @@
reg.UnregisterFlow(id)
<-drainDone

reg.Undrain()
reg.undrain()
})

// StaleFlowEntry verifies that the Drain doesn't get stuck forever due to a
// stale flowEntry in the registry that was created by ConnectInboundStream
// which failed its initial handshake. If such entry remains in the
// registry, then the Drain process will be stuck forever (#100710).
t.Run("StaleFlowEntry", func(t *testing.T) {
serverStream, _ /* clientStream */, cleanup := createDummyStream(t)
defer cleanup()

// Create a gRPC stream where we inject an error on each Send call.
//
// We don't need to delay the RPC call.
delayCh := make(chan struct{})
close(delayCh)
injectedErr := errors.New("injected error")
serverStream = &delayedErrorServerStream{
DistSQL_FlowStreamServer: serverStream,
// Make rpcCalledCh a buffered channel so that the RPC is not
// blocked.
rpcCalledCh: make(chan struct{}, 1),
delayCh: delayCh,
err: injectedErr,
}

// Attempt to connect an inbound stream for which there is no flow in
// the registry. In such case we attempt to send a handshake message
// which should fail with the injected error.
_, _, _, err := reg.ConnectInboundStream(ctx, id, execinfrapb.StreamID(0), serverStream, 0 /* timeout */)
if !errors.Is(err, injectedErr) {
t.Fatalf("unexpected error: %v", err)
}

// Now, the crux of the test: attempt to drain the registry and verify
// that there were no flows to drain. If we do see some, then it means
// that we have a stale flowEntry that will never get cleaned up.
var remaining int
reporter := func(r int, _ redact.SafeString) {
remaining += r
}
reg.Drain(0 /* flowDrainWait */, 0 /* minFlowDrainWait */, reporter)
reg.undrain()
if remaining != 0 {
t.Fatalf("expected zero flows reported, found %d", remaining)
}
})
}

30 changes: 20 additions & 10 deletions pkg/sql/importer/import_stmt_test.go
@@ -3024,7 +3024,6 @@ func TestImportIntoCSV(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

skip.WithIssue(t, 100477, "programming error in dropTableAfterJobComplete below")
skip.UnderShort(t)
skip.UnderRace(t, "takes >1min under race")

@@ -3102,15 +3101,14 @@
testNum := -1
insertedRows := numFiles * rowsPerFile

// Some of the tests result in a failing import. in this case,
// the table won't be able to drop until IMPORT's
// OnFailOrCancel brings the table back online.
// Some of the tests result in a failing import. In this case, the table
// won't be able to drop until IMPORT's OnFailOrCancel brings the table back
// online.
//
// Depening on which node that the import started on,
// we may have paused rather than failed. This is
// because _all_ errors that "rpc error" are
// retriable. If we end up with a cross-node nodelocal
// request, we get a pause.
// Depending on which node that the import started on, we may have paused
// rather than failed. This is because _all_ errors that "rpc error" are
// retriable. If we end up with a cross-node nodelocal request, we get a
// pause.
dropTableAfterJobComplete := func(t *testing.T, tableName string) {
var jobID int
row := conn.QueryRow("SELECT job_id FROM [SHOW JOBS] WHERE job_type = 'IMPORT' AND status IN ('paused', 'pause-requested', 'reverting')")
@@ -3119,7 +3117,19 @@
t.Fatal(err)
}
if jobID != 0 {
sqlDB.Exec(t, "RESUME JOB $1", jobID)
// If the job has the "pause-requested" status, we must block until
// it transitions to the "paused" status (because we cannot cancel
// the job otherwise).
testutils.SucceedsSoon(t, func() error {
r := sqlDB.QueryRow(t, "SELECT status FROM [SHOW JOBS] WHERE job_id = $1", jobID)
var status string
r.Scan(&status)
if status == string(jobs.StatusPauseRequested) {
return errors.New("still has pause-requested status")
}
return nil
})
sqlDB.Exec(t, "CANCEL JOB $1", jobID)
sqlDB.Exec(t, "SHOW JOB WHEN COMPLETE $1", jobID)
}
sqlDB.Exec(t, fmt.Sprintf("DROP TABLE %s", tableName))
2 changes: 1 addition & 1 deletion pkg/sql/pgwire/server.go
@@ -517,7 +517,7 @@ func (s *Server) waitConnsDone() (cancelChanMap, chan struct{}, chan struct{}) {
connCancelMap := func() cancelChanMap {
s.mu.Lock()
defer s.mu.Unlock()
connCancelMap := make(cancelChanMap)
connCancelMap := make(cancelChanMap, len(s.mu.connCancelMap))
for done, cancel := range s.mu.connCancelMap {
connCancelMap[done] = cancel
}
22 changes: 17 additions & 5 deletions pkg/upgrade/upgrades/schemachanger_elements_test.go
@@ -27,7 +27,6 @@ import (
"github.com/cockroachdb/cockroach/pkg/sql/tests"
"github.com/cockroachdb/cockroach/pkg/testutils"
"github.com/cockroachdb/cockroach/pkg/testutils/serverutils"
"github.com/cockroachdb/cockroach/pkg/testutils/skip"
"github.com/cockroachdb/cockroach/pkg/util"
"github.com/cockroachdb/cockroach/pkg/util/ctxgroup"
"github.com/cockroachdb/cockroach/pkg/util/leaktest"
@@ -37,7 +36,6 @@

func TestUpgradeSchemaChangerElements(t *testing.T) {
defer leaktest.AfterTest(t)()
skip.WithIssue(t, 98062, "flaky test")
defer log.Scope(t).Close(t)

ctx := context.Background()
@@ -120,6 +118,7 @@ func TestUpgradeSchemaChangerElements(t *testing.T) {
schemaChangeAllowedToComplete chan struct{}
waitedForJob chan struct{}
jobIsPaused chan struct{}
jobStarted chan struct{}
readyToQuery int64 = 0
)
scJobID := int64(jobspb.InvalidJobID)
@@ -131,6 +130,9 @@
if p.Stages[stageIdx].Phase == scop.PreCommitPhase {
atomic.StoreInt64(&scJobID, int64(p.JobID))
} else if p.Stages[stageIdx].Phase > scop.PreCommitPhase {
if tc.unpauseJob {
jobStarted <- struct{}{}
}
<-waitedForJob
waitedForJob = nil
schemaChangeAllowedToComplete <- struct{}{}
@@ -156,19 +158,23 @@
params.Knobs.JobsTestingKnobs = jobKnobs
s, sqlDB, _ := serverutils.StartServer(t, params)
defer s.Stopper().Stop(ctx)
_, err := sqlDB.Exec("CREATE TABLE t (pk INT PRIMARY KEY, b INT)")
// Make sure our cluster is up
_, err := sqlDB.Exec("SET CLUSTER SETTING version=$1",
clusterversion.ByKey(clusterversion.V23_1_SchemaChangerDeprecatedIndexPredicates-1).String())
require.NoError(t, err)
_, err = sqlDB.Exec("CREATE TABLE t (pk INT PRIMARY KEY, b INT)")
require.NoError(t, err)
_, err = sqlDB.Exec("CREATE UNIQUE INDEX ON t (b) WHERE pk > 0")
require.NoError(t, err)
_, err = sqlDB.Exec("SET CLUSTER SETTING jobs.debug.pausepoints='newschemachanger.before.exec'")
require.NoError(t, err)

ctx := context.Background()
g := ctxgroup.WithContext(ctx)

schemaChangeAllowedToComplete = make(chan struct{})
waitedForJob = make(chan struct{})
jobIsPaused = make(chan struct{})
jobStarted = make(chan struct{})
g.GoCtx(func(ctx context.Context) error {
err := tc.run(t, sqlDB)
jobIsPaused <- struct{}{}
Expand All @@ -183,8 +189,14 @@ func TestUpgradeSchemaChangerElements(t *testing.T) {
require.NoError(t, err)
_, err = sqlDB.Exec("RESUME JOB $1", jobspb.JobID(atomic.LoadInt64(&scJobID)))
require.NoError(t, err)
// Wait for the job to start after the unpause,
// we will let it get stuck during execution next.
// We don't want a race condition with the migration
// job that will pause it again.
<-jobStarted
}
_, err = sqlDB.Exec("SET CLUSTER SETTING version = crdb_internal.node_executable_version()")
_, err = sqlDB.Exec("SET CLUSTER SETTING version = $1",
clusterversion.ByKey(clusterversion.V23_1_SchemaChangerDeprecatedIndexPredicates).String())
return err
})

