
sentry: table.go:203: attempted to update job for mutation 2, but job already exists with mutation 1 (1) Wraps: (2) assertion failure Wraps: (3) attached stack trace -- stack trace: | github.com/cockroach... #93356

Closed
cockroach-teamcity opened this issue Dec 9, 2022 · 1 comment
Labels
C-bug: Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.
O-sentry: Originated from an in-the-wild panic report.

Comments

@cockroach-teamcity (Member) commented Dec 9, 2022

This issue was autofiled by Sentry. It represents a crash or reported error on a live cluster with telemetry enabled.

Sentry link: https://sentry.io/organizations/cockroach-labs/issues/3798840708/?referrer=webhooks_plugin

Panic message:

table.go:203: attempted to update job for mutation 2, but job already exists with mutation 1
(1)
Wraps: (2) assertion failure
Wraps: (3) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/sql.(*planner).createOrUpdateSchemaChangeJob
| github.com/cockroachdb/cockroach/pkg/sql/table.go:203
| github.com/cockroachdb/cockroach/pkg/sql.(*planner).writeSchemaChange
| github.com/cockroachdb/cockroach/pkg/sql/table.go:251
| github.com/cockroachdb/cockroach/pkg/sql.(*createIndexNode).startExec
| github.com/cockroachdb/cockroach/pkg/sql/create_index.go:812
| github.com/cockroachdb/cockroach/pkg/sql.startExec.func2
| github.com/cockroachdb/cockroach/pkg/sql/plan.go:518
| github.com/cockroachdb/cockroach/pkg/sql.(*planVisitor).visitInternal.func1
| github.com/cockroachdb/cockroach/pkg/sql/walk.go:112
| github.com/cockroachdb/cockroach/pkg/sql.(*planVisitor).visitInternal
| github.com/cockroachdb/cockroach/pkg/sql/walk.go:297
| github.com/cockroachdb/cockroach/pkg/sql.(*planVisitor).visit
| github.com/cockroachdb/cockroach/pkg/sql/walk.go:79
| github.com/cockroachdb/cockroach/pkg/sql.walkPlan
| github.com/cockroachdb/cockroach/pkg/sql/walk.go:43
| github.com/cockroachdb/cockroach/pkg/sql.startExec
| github.com/cockroachdb/cockroach/pkg/sql/plan.go:521
| github.com/cockroachdb/cockroach/pkg/sql.(*planNodeToRowSource).Start
| github.com/cockroachdb/cockroach/pkg/sql/plan_node_to_row_source.go:147
| github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Columnarizer).Init
| github.com/cockroachdb/cockroach/pkg/sql/colexec/columnarizer.go:178
| github.com/cockroachdb/cockroach/pkg/sql/colflow.(*batchInfoCollector).Init
| github.com/cockroachdb/cockroach/pkg/sql/colflow/stats.go:90
| github.com/cockroachdb/cockroach/pkg/sql/colflow.(*BatchFlowCoordinator).init.func1
| github.com/cockroachdb/cockroach/pkg/sql/colflow/flow_coordinator.go:247
| github.com/cockroachdb/cockroach/pkg/sql/colexecerror.CatchVectorizedRuntimeError
| github.com/cockroachdb/cockroach/pkg/sql/colexecerror/error.go:92
| github.com/cockroachdb/cockroach/pkg/sql/colflow.(*BatchFlowCoordinator).init
| github.com/cockroachdb/cockroach/pkg/sql/colflow/flow_coordinator.go:246
| github.com/cockroachdb/cockroach/pkg/sql/colflow.(*BatchFlowCoordinator).Run
| github.com/cockroachdb/cockroach/pkg/sql/colflow/flow_coordinator.go:291
| github.com/cockroachdb/cockroach/pkg/sql/colflow.(*vectorizedFlow).Run
| github.com/cockroachdb/cockroach/pkg/sql/colflow/vectorized_flow.go:320
| github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).Run
| github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:695
| github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).PlanAndRun
| github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:1611
| github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).PlanAndRunAll
| github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:1334
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execWithDistSQLEngine
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:1541
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).dispatchToExecutionEngine
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:1177
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execStmtInOpenState
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:687
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execStmt.func1
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:129
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execWithProfiling
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:2382
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execStmt
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:128
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execPortal
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:218
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execCmd.func2
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:1998
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execCmd
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:2000
| github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).run
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:1846
| github.com/cockroachdb/cockroach/pkg/sql.(*Server).ServeConn
| github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:828
| github.com/cockroachdb/cockroach/pkg/sql/pgwire.(*conn).processCommandsAsync.func1
| github.com/cockroachdb/cockroach/pkg/sql/pgwire/conn.go:728
Wraps: (4) attempted to update job for mutation 2, but job already exists with mutation 1
Error types: (1) *colexecerror.notInternalError (2) *assert.withAssertionFailure (3) *withstack.withStack (4) *errutil.leafError
-- report composition:
*errutil.leafError: attempted to update job for mutation 2, but job already exists with mutation 1
table.go:203: *withstack.withStack (top exception)
*assert.withAssertionFailure
*colexecerror.notInternalError
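
For orientation, the error-type chain above is built with the cockroachdb/errors library: a plain leaf error, a stack trace wrapper, an assertion-failure marker, and the vectorized engine's not-internal wrapper. A minimal sketch of constructing and inspecting a similarly shaped chain with the public errors API (the buildReportLikeError helper below is hypothetical, for illustration only; inside CockroachDB the chain comes from errors.AssertionFailedf in table.go):

```go
package main

import (
	"fmt"

	"github.com/cockroachdb/errors"
)

// buildReportLikeError assembles an error chain shaped like the one in this
// report: a leaf error with a stack trace, plus an assertion-failure marker.
// (Hypothetical helper, not CockroachDB code.)
func buildReportLikeError() error {
	leaf := errors.Newf(
		"attempted to update job for mutation %d, but job already exists with mutation %d", 2, 1)
	return errors.WithAssertionFailure(leaf)
}

func main() {
	err := buildReportLikeError()
	// HasAssertionFailure reports whether any error in the chain carries the
	// assertion-failure marker; errors of this class are what show up as
	// "assertion failure" in reports like this one.
	fmt.Println(errors.HasAssertionFailure(err)) // true
	fmt.Println(err)
}
```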

Stacktrace (expand for inline code snippets):

cockroach/pkg/sql/table.go

Lines 202 to 204 in 77667a1

if mutationID != descpb.InvalidMutationID && mutationID != oldDetails.TableMutationID {
return errors.AssertionFailedf(
"attempted to update job for mutation %d, but job already exists with mutation %d",
in pkg/sql.(*planner).createOrUpdateSchemaChangeJob
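
The check in this first frame is the invariant that fired. Below is a self-contained sketch of the same check; MutationID, InvalidMutationID, and checkJobMutationID are simplified stand-ins for the real descpb and job-details types, not the actual CockroachDB definitions:

```go
package main

import (
	"fmt"

	"github.com/cockroachdb/errors"
)

// MutationID is a simplified stand-in for descpb.MutationID.
type MutationID uint32

// InvalidMutationID is a stand-in for descpb.InvalidMutationID.
const InvalidMutationID MutationID = 0

// checkJobMutationID mirrors the invariant from table.go lines 202-204: an
// existing schema-change job may only be updated for the mutation it already
// tracks.
func checkJobMutationID(requested, existing MutationID) error {
	if requested != InvalidMutationID && requested != existing {
		return errors.AssertionFailedf(
			"attempted to update job for mutation %d, but job already exists with mutation %d",
			requested, existing)
	}
	return nil
}

func main() {
	// The reported failure corresponds to requested=2 against an existing
	// job tracking mutation 1.
	fmt.Println(checkJobMutationID(2, 1))
}
```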

cockroach/pkg/sql/table.go

Lines 250 to 252 in 77667a1

if !tableDesc.IsNew() {
if err := p.createOrUpdateSchemaChangeJob(ctx, tableDesc, jobDesc, mutationID); err != nil {
return err
in pkg/sql.(*planner).writeSchemaChange
mutationID := n.tableDesc.ClusterVersion().NextMutationID
if err := params.p.writeSchemaChange(
params.ctx, n.tableDesc, mutationID, tree.AsStringWithFQNames(n.n, params.Ann()),
in pkg/sql.(*createIndexNode).startExec

cockroach/pkg/sql/plan.go

Lines 517 to 519 in 77667a1

}
return n.startExec(params)
},
in pkg/sql.startExec.func2

cockroach/pkg/sql/walk.go

Lines 111 to 113 in 77667a1

}
v.err = v.observer.leaveNode(name, plan)
}()
in pkg/sql.(*planVisitor).visitInternal.func1

cockroach/pkg/sql/walk.go

Lines 296 to 298 in 77667a1

}
}
in pkg/sql.(*planVisitor).visitInternal
}
v.visitInternal(plan, name)
return plan
in pkg/sql.(*planVisitor).visit
v := makePlanVisitor(ctx, observer)
v.visit(plan)
return v.err
in pkg/sql.walkPlan

cockroach/pkg/sql/plan.go

Lines 520 to 522 in 77667a1

}
return walkPlan(params.ctx, plan, o)
}
in pkg/sql.startExec
// This starts all of the nodes below this node.
if err := startExec(p.params, p.node); err != nil {
p.MoveToDraining(err)
in pkg/sql.(*planNodeToRowSource).Start
ctx = c.StartInternalNoSpan(ctx)
c.input.Start(ctx)
if execStatsHijacker, ok := c.input.(execinfra.ExecStatsForTraceHijacker); ok {
in pkg/sql/colexec.(*Columnarizer).Init
func (bic *batchInfoCollector) Init(ctx context.Context) {
bic.Operator.Init(ctx)
bic.mu.Lock()
in pkg/sql/colflow.(*batchInfoCollector).Init
return colexecerror.CatchVectorizedRuntimeError(func() {
f.input.Root.Init(ctx)
})
in pkg/sql/colflow.(*BatchFlowCoordinator).init.func1
}()
operation()
return retErr
in pkg/sql/colexecerror.CatchVectorizedRuntimeError
func (f *BatchFlowCoordinator) init(ctx context.Context) error {
return colexecerror.CatchVectorizedRuntimeError(func() {
f.input.Root.Init(ctx)
in pkg/sql/colflow.(*BatchFlowCoordinator).init
if err := f.init(ctx); err != nil {
f.pushError(err)
in pkg/sql/colflow.(*BatchFlowCoordinator).Run
log.VEvent(ctx, 1, "running the batch flow coordinator in the flow's goroutine")
f.batchFlowCoordinator.Run(ctx)
}
in pkg/sql/colflow.(*vectorizedFlow).Run
// TODO(radu): this should go through the flow scheduler.
flow.Run(ctx, func() {})
in pkg/sql.(*DistSQLPlanner).Run
recv.expectedRowsRead = int64(physPlan.TotalEstimatedScannedRows)
runCleanup := dsp.Run(ctx, planCtx, txn, physPlan, recv, evalCtx, nil /* finishedSetupFn */)
return func() {
in pkg/sql.(*DistSQLPlanner).PlanAndRun
// the planner whether or not to plan remote table readers.
cleanup := dsp.PlanAndRun(
ctx, evalCtx, planCtx, planner.txn, planner.curPlan.main, recv,
in pkg/sql.(*DistSQLPlanner).PlanAndRunAll
}
err := ex.server.cfg.DistSQLPlanner.PlanAndRunAll(ctx, evalCtx, planCtx, planner, recv, evalCtxFactory)
return *recv.stats, err
in pkg/sql.(*connExecutor).execWithDistSQLEngine
ex.sessionTracing.TraceExecStart(ctx, "distributed")
stats, err = ex.execWithDistSQLEngine(
ctx, planner, stmt.AST.StatementReturnType(), res, distribute, progAtomic,
in pkg/sql.(*connExecutor).dispatchToExecutionEngine
if err := ex.dispatchToExecutionEngine(stmtCtx, p, res); err != nil {
stmtThresholdSpan.Finish()
in pkg/sql.(*connExecutor).execStmtInOpenState
err = ex.execWithProfiling(ctx, ast, prepared, func(ctx context.Context) error {
ev, payload, err = ex.execStmtInOpenState(ctx, parserStmt, prepared, pinfo, res, canAutoCommit)
return err
in pkg/sql.(*connExecutor).execStmt.func1
} else {
err = op(ctx)
}
in pkg/sql.(*connExecutor).execWithProfiling
case stateOpen:
err = ex.execWithProfiling(ctx, ast, prepared, func(ctx context.Context) error {
ev, payload, err = ex.execStmtInOpenState(ctx, parserStmt, prepared, pinfo, res, canAutoCommit)
in pkg/sql.(*connExecutor).execStmt
}
ev, payload, err = ex.execStmt(ctx, portal.Stmt.Statement, portal.Stmt, pinfo, stmtRes, canAutoCommit)
// Portal suspension is supported via a "side" state machine
in pkg/sql.(*connExecutor).execPortal
canAutoCommit := ex.implicitTxn() && tcmd.FollowedBySync
ev, payload, err = ex.execPortal(ctx, portal, portalName, stmtRes, pinfo, canAutoCommit)
return err
in pkg/sql.(*connExecutor).execCmd.func2
return err
}()
// Note: we write to ex.statsCollector.phaseTimes, instead of ex.phaseTimes,
in pkg/sql.(*connExecutor).execCmd
var err error
if err = ex.execCmd(); err != nil {
if errors.IsAny(err, io.EOF, errDrainingComplete) {
in pkg/sql.(*connExecutor).run
}(ctx, h)
return h.ex.run(ctx, s.pool, reserved, cancel)
}
in pkg/sql.(*Server).ServeConn
reservedOwned = false // We're about to pass ownership away.
retErr = sqlServer.ServeConn(ctx, connHandler, reserved, cancelConn)
}()
in pkg/sql/pgwire.(*conn).processCommandsAsync.func1

pkg/sql/table.go in pkg/sql.(*planner).createOrUpdateSchemaChangeJob at line 203
pkg/sql/table.go in pkg/sql.(*planner).writeSchemaChange at line 251
pkg/sql/create_index.go in pkg/sql.(*createIndexNode).startExec at line 812
pkg/sql/plan.go in pkg/sql.startExec.func2 at line 518
pkg/sql/walk.go in pkg/sql.(*planVisitor).visitInternal.func1 at line 112
pkg/sql/walk.go in pkg/sql.(*planVisitor).visitInternal at line 297
pkg/sql/walk.go in pkg/sql.(*planVisitor).visit at line 79
pkg/sql/walk.go in pkg/sql.walkPlan at line 43
pkg/sql/plan.go in pkg/sql.startExec at line 521
pkg/sql/plan_node_to_row_source.go in pkg/sql.(*planNodeToRowSource).Start at line 147
pkg/sql/colexec/columnarizer.go in pkg/sql/colexec.(*Columnarizer).Init at line 178
pkg/sql/colflow/stats.go in pkg/sql/colflow.(*batchInfoCollector).Init at line 90
pkg/sql/colflow/flow_coordinator.go in pkg/sql/colflow.(*BatchFlowCoordinator).init.func1 at line 247
pkg/sql/colexecerror/error.go in pkg/sql/colexecerror.CatchVectorizedRuntimeError at line 92
pkg/sql/colflow/flow_coordinator.go in pkg/sql/colflow.(*BatchFlowCoordinator).init at line 246
pkg/sql/colflow/flow_coordinator.go in pkg/sql/colflow.(*BatchFlowCoordinator).Run at line 291
pkg/sql/colflow/vectorized_flow.go in pkg/sql/colflow.(*vectorizedFlow).Run at line 320
pkg/sql/distsql_running.go in pkg/sql.(*DistSQLPlanner).Run at line 695
pkg/sql/distsql_running.go in pkg/sql.(*DistSQLPlanner).PlanAndRun at line 1611
pkg/sql/distsql_running.go in pkg/sql.(*DistSQLPlanner).PlanAndRunAll at line 1334
pkg/sql/conn_executor_exec.go in pkg/sql.(*connExecutor).execWithDistSQLEngine at line 1541
pkg/sql/conn_executor_exec.go in pkg/sql.(*connExecutor).dispatchToExecutionEngine at line 1177
pkg/sql/conn_executor_exec.go in pkg/sql.(*connExecutor).execStmtInOpenState at line 687
pkg/sql/conn_executor_exec.go in pkg/sql.(*connExecutor).execStmt.func1 at line 129
pkg/sql/conn_executor_exec.go in pkg/sql.(*connExecutor).execWithProfiling at line 2382
pkg/sql/conn_executor_exec.go in pkg/sql.(*connExecutor).execStmt at line 128
pkg/sql/conn_executor_exec.go in pkg/sql.(*connExecutor).execPortal at line 218
pkg/sql/conn_executor.go in pkg/sql.(*connExecutor).execCmd.func2 at line 1998
pkg/sql/conn_executor.go in pkg/sql.(*connExecutor).execCmd at line 2000
pkg/sql/conn_executor.go in pkg/sql.(*connExecutor).run at line 1846
pkg/sql/conn_executor.go in pkg/sql.(*Server).ServeConn at line 828
pkg/sql/pgwire/conn.go in pkg/sql/pgwire.(*conn).processCommandsAsync.func1 at line 728
| Tag | Value |
| --- | --- |
| Cockroach Release | v22.2.0 |
| Cockroach SHA: | 77667a1 |
| Platform | linux amd64 |
| Distribution | CCL |
| Environment | v22.2.0 |
| Command | server |
| Go Version | `` |
| # of CPUs | |
| # of Goroutines | |

Jira issue: CRDB-22283

@cockroach-teamcity added the C-bug and O-sentry labels on Dec 9, 2022
@yuzefovich (Member) commented

dup of #82921

@exalate-issue-sync bot changed the title sentry: table.go:203: attempted to update job for mutation 2, but job already exists with mutation 1 (1) Wraps: (2) assertion failure Wraps: (3) attached stack trace -- stack trace: | github.com/cockroach... on Dec 27, 2022