84865: kvserver: always return NLHE on lease acquisition timeouts r=nvanbenschoten a=erikgrinaker

In ab74b97 we added internal timeouts for lease acquisitions. These
were wrapped in `RunWithTimeout()`, as mandated for context timeouts.
However, this would mask the returned `NotLeaseHolderError` as a
`TimeoutError`, preventing the DistSender from retrying it and instead
propagating it out to the client. Additionally, context cancellation
errors from the actual RPC call were never wrapped as a
`NotLeaseHolderError` in the first place.

This ended up only happening in a very specific scenario where the outer
timeout added to the client context did not trigger, but the inner
timeout for the coalesced request context did trigger while the lease
request was in flight. Coincidentally, the outer `RunWithTimeout()` call
did not return the `roachpb.Error` from the closure but instead passed
it via a captured variable, bypassing the error wrapping.

This patch replaces the `RunWithTimeout()` calls with regular
`context.WithTimeout()` calls to avoid the error wrapping, and returns a
`NotLeaseHolderError` from `requestLease()` if the RPC request fails and
the context was cancelled (presumably causing the error). Another option
would be to extract an NLHE from the error chain, but this would require
correct propagation of the structured error chain across RPC boundaries,
so out of an abundance of caution and with an eye towards backports, we
instead choose to return a bare `NotLeaseHolderError`.

The empty lease in the returned error prevents the DistSender from
updating its caches on context cancellation.
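
The resulting control flow looks roughly like this sketch (names such
as `sendLeaseRPC` are illustrative stand-ins, not the actual kvserver
code):

```go
package sketch

import (
	"context"
	"time"
)

// notLeaseHolderError stands in for roachpb.NotLeaseHolderError; it
// carries an empty lease so the DistSender won't update its caches.
type notLeaseHolderError struct{}

func (notLeaseHolderError) Error() string { return "not lease holder" }

// requestLease sketches the fix: a plain context.WithTimeout (instead
// of a RunWithTimeout wrapper that would mask the error as a
// TimeoutError) and context cancellation mapped to a retryable
// NotLeaseHolderError.
func requestLease(
	ctx context.Context, timeout time.Duration, sendLeaseRPC func(context.Context) error,
) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	if err := sendLeaseRPC(ctx); err != nil {
		if ctx.Err() != nil {
			// The RPC failed and the context was cancelled, presumably
			// causing the error: return a bare NotLeaseHolderError so the
			// DistSender retries instead of seeing a TimeoutError.
			return notLeaseHolderError{}
		}
		return err
	}
	return nil
}
```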

Resolves #84258.
Resolves #85115.

Release note (bug fix): Fixed a bug where clients could sometimes
receive errors due to lease acquisition timeouts of the form
`operation "storage.pendingLeaseRequest: requesting lease" timed out after 6s`.

84946: distsql: make the number of DistSQL runners dynamic r=yuzefovich a=yuzefovich

**distsql: make the number of DistSQL runners dynamic**

This commit improves the infrastructure around a pool of "DistSQL
runners" that are used for issuing SetupFlow RPCs in parallel.
Previously, we had a hard-coded pool of 16 goroutines, which was
probably insufficient in many cases. This commit switches the default
to `4 x N(cpus)` so that the pool size is proportional to how beefy the
node is (under the expectation that the larger the node is, the more
distributed queries it will be handling). Four was chosen as the
multiplier so that we keep the previous default on machines with
4 CPUs.

Additionally, this commit introduces a mechanism to dynamically adjust
the number of runners based on a cluster setting. Whenever the setting
is reduced, some of the workers are stopped; whenever it is increased,
new workers are spun up accordingly. A coordinator listens on two
channels: one signaling server quiescence, and another carrying the new
target pool size. Whenever a new target size is received, the
coordinator spins up or shuts down one worker at a time until that
target size is achieved. The workers, however, don't access the
quiescence channel themselves and instead rely on the coordinator to
tell them to exit (either by closing the channel when quiescing or by
sending a single message when the target size is decreased).
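
A rough sketch of that coordinator loop (all channel, type, and
function names are illustrative, not the actual distsql code):

```go
package sketch

import "runtime"

// defaultRunners mirrors the new default of 4 x N(cpus); on a machine
// with 4 CPUs this yields the previous hard-coded pool size of 16.
var defaultRunners = 4 * runtime.NumCPU()

// runnerCoordinator sketches the resizing mechanism described above.
type runnerCoordinator struct {
	quiesce chan struct{} // closed when the server quiesces
	newSize chan int      // carries new target pool sizes
	stop    chan struct{} // workers exit on a receive, or when closed
}

func (c *runnerCoordinator) run(worker func(stop chan struct{})) {
	current := 0
	for {
		select {
		case target := <-c.newSize:
			// Spin up / shut down one worker at a time until the new
			// target size is achieved.
			for ; current < target; current++ {
				go worker(c.stop)
			}
			for ; current > target; current-- {
				c.stop <- struct{}{}
			}
		case <-c.quiesce:
			// Workers don't watch the quiescence channel themselves;
			// closing their stop channel tells all of them to exit.
			close(c.stop)
			return
		}
	}
}
```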

Fixes: #84459.

Release note: None

**distsql: change the flow setup code a bit**

Previously, when setting up a distributed plan, we would wait for all
SetupFlow RPCs to come back before setting up the flow on the gateway.
Most likely (in the happy scenario) all those RPCs will be successful,
so we can parallelize the happy path a bit by setting up the local flow
while the RPCs are in flight, which is what this commit does. This
seems especially beneficial given the change in the previous commit to
increase the number of DistSQL runners on beefy machines: we are now
more likely to issue SetupFlow RPCs asynchronously.
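
A minimal sketch of the reordered setup, with illustrative names:

```go
package sketch

import "context"

// setupFlows sketches the reordering: the remote SetupFlow RPCs are
// issued asynchronously, the local (gateway) flow is set up while they
// are in flight, and only then do we wait for the RPC results.
func setupFlows(
	ctx context.Context,
	remoteRPCs []func(context.Context) error,
	setupLocalFlow func(context.Context) error,
) error {
	results := make(chan error, len(remoteRPCs))
	for _, rpc := range remoteRPCs {
		rpc := rpc
		go func() { results <- rpc(ctx) }() // SetupFlow RPC in flight
	}
	// Happy-path parallelism: set up the local flow while the remote
	// RPCs are still outstanding.
	if err := setupLocalFlow(ctx); err != nil {
		return err
	}
	for range remoteRPCs {
		if err := <-results; err != nil {
			return err
		}
	}
	return nil
}
```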

Release note: None

85091: flowinfra: disable queueing mechanism of the flow scheduler by default r=yuzefovich a=yuzefovich

This commit disables the queueing mechanism of the flow scheduler as
part of the effort to remove that queueing altogether during the 23.1
release cycle. To get there, we take the conservative approach of
introducing a cluster setting that determines whether the queueing is
enabled or not; when it is disabled, the `sql.distsql.max_running_flows`
limit is effectively treated as infinite. By default, the queueing is
now disabled, since recent experiments have shown that admission
control does a good job of protecting nodes from an influx of remote
flows.
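
In effect, the admission decision becomes something like this sketch
(the setting corresponds to `sql.distsql.flow_scheduler_queueing.enabled`,
which appears in the test changes below; everything else is
illustrative):

```go
package sketch

// admitFlow sketches the resulting gate: with queueing disabled (the
// new default), the sql.distsql.max_running_flows limit is treated as
// infinite and remote flows always run immediately, leaving node
// protection to admission control.
func admitFlow(queueingEnabled bool, running, maxRunningFlows int) (runNow bool) {
	if !queueingEnabled {
		return true // limit is effectively infinite; never queue
	}
	return running < maxRunningFlows
}
```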

Addresses: #34229.

Release note: None

85134: sql: allow NULL in create view definition r=mgartner a=rafiss

fixes #84000

Release note (sql change): CREATE VIEW statements can now have a
constant NULL column definition. The resulting column is of type TEXT.

85178: kvserver: record batch requests with no gateway r=kvoli a=kvoli

Previously, batch requests with no `GatewayNodeID` would not be
accounted for on the QPS of a replica. By extension, the store QPS would
also not aggregate this missing QPS over replicas it holds. This patch
introduces tracking for all requests, regardless of the `GatewayNodeID`.

Requests without a gateway were originally excluded because
follow-the-workload lease transfers consider the per-locality counts,
so untagged localities were not useful. That logic has since been
updated to filter out such localities directly, so it is no longer
necessary to exclude these requests.

`leaseholderStats`, which previously tracked the QPS, and
`writeStats`, which tracked the MVCC keys written, have also been
removed; they are duplicated by `batchRequest` and `writeKeys`
respectively, within the `loadStats` of a replica.
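
A hedged sketch of what the tracking change amounts to (field and
method names are illustrative, not the actual kvserver code):

```go
package sketch

// replicaLoad sketches the per-replica load tracking.
type replicaLoad struct {
	batchRequests float64            // aggregated into the store-level QPS
	perLocality   map[string]float64 // consulted by follow-the-workload
}

func newReplicaLoad() *replicaLoad {
	return &replicaLoad{perLocality: make(map[string]float64)}
}

// recordBatchRequest now bumps the replica QPS for every batch
// request; previously, requests with no GatewayNodeID were skipped
// entirely, under-counting both replica and store QPS.
func (l *replicaLoad) recordBatchRequest(gatewayNodeID int32, locality string) {
	l.batchRequests++
	if gatewayNodeID != 0 {
		l.perLocality[locality]++
	}
}
```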

resolves #85157

Release note: None

85355: sql: improve physical planning of window functions r=yuzefovich a=yuzefovich

**sql: remove shouldNotDistribute recommendation**

It doesn't seem to be used much.

Release note: None

**sql: improve physical planning of window functions**

This commit improves the physical planning of window functions in
several ways.

First, the optimizer is updated so that all window functions with
a PARTITION BY clause are constructed first, followed by the remaining
window functions without PARTITION BY. This is needed because the
execution engine can only evaluate functions with PARTITION BY in
a distributed fashion; as a result of this change, we are now more
likely to get partially distributed execution (previously this
depended on the order in which window functions were mentioned in the
query).

Second, the physical planner now recommends that we "should
distribute" the plan if it finds at least one window function with
a PARTITION BY clause. Previously, we made no recommendation about the
distribution based on the presence of window functions (i.e. we relied
on the rest of the plan to do so), but they can be quite
computation-intensive, so whenever we can distribute the execution, we
should do so.
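
A minimal sketch of the new recommendation logic, with illustrative
names:

```go
package sketch

// windowFn is an illustrative stand-in for the planner's
// representation of a window function.
type windowFn struct {
	partitionBy []string
}

// shouldDistributeWindows sketches the new recommendation: only window
// functions with a PARTITION BY clause can be evaluated in a
// distributed fashion, so finding at least one of them makes the
// planner recommend distributing the plan.
func shouldDistributeWindows(fns []windowFn) bool {
	for _, fn := range fns {
		if len(fn.partitionBy) > 0 {
			return true
		}
	}
	return false
}
```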

Additionally, this commit removes some of the code in the physical
planner that tries to find window functions with the same PARTITION BY
and ORDER BY clauses; that code has been redundant for a long time
given that the optimizer does the same thing.

Release note: None

85366: sql,logictest,descidgen: abstract descriptor ID generation, make deterministic in logictests r=ajwerner a=ajwerner

The first commit adds an interface for descriptor ID generation and propagates the interface from the ExecCfg into the EvalContext. There is some minor refactoring to avoid propagating an ExecCfg further up the stack by making the parameters more specific. The second commit adds a testing knob to use a transactional implementation in the EvalContext.
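
A sketch of what such an interface plausibly looks like (the method
name matches the call sites in this commit, e.g.
`p.ExecCfg().DescIDGenerator.GenerateUniqueDescID(ctx)`; the rest is
assumed):

```go
package sketch

import "context"

// descID stands in for descpb.ID.
type descID uint32

// DescIDGenerator abstracts descriptor ID generation so that tests can
// swap in a transactional (deterministic) implementation.
type DescIDGenerator interface {
	GenerateUniqueDescID(ctx context.Context) (descID, error)
}
```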

Fixes #37751
Fixes #69226

85406: schemachanger: check explain diagrams during rollback test r=postamar a=postamar

This commit enriches the declarative schema changer integration tests
by making data-driven EXPLAIN output assertions easier to add as
a complement to otherwise unrelated tests. In particular, this commit
improves the rollback test to check the explained rollback plan for
each post-commit revertible stage. This should make it easier to debug
bad rule definitions, which would otherwise manifest as the schema
change hanging during the rollback.

Release note: None

85414: colflow: fix a recent flake r=yuzefovich a=yuzefovich

In 0866ddc we merged a change that
relied on the assumption that the allocator passed to the parallel
unordered synchronizer was not used by anyone else. This assumption was
broken in a test, which is now fixed.

Fixes: #85360.

Release note: None

Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
Co-authored-by: Austen McClernon <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
Co-authored-by: Marius Posta <[email protected]>
7 people committed Aug 1, 2022
10 parents 590049f + 067e740 + 27ade23 + 77c4673 + 459a4f3 + 0fea845 + 561383e + 85ce24d + 04769da + 1d20a8c commit 314baa5
Showing 255 changed files with 20,452 additions and 611 deletions.
2 changes: 0 additions & 2 deletions pkg/ccl/backupccl/BUILD.bazel
@@ -75,7 +75,6 @@ go_library(
  "//pkg/sql/catalog/colinfo",
  "//pkg/sql/catalog/dbdesc",
  "//pkg/sql/catalog/descbuilder",
- "//pkg/sql/catalog/descidgen",
  "//pkg/sql/catalog/descpb",
  "//pkg/sql/catalog/descs",
  "//pkg/sql/catalog/ingesting",
@@ -224,7 +223,6 @@ go_test(
  "//pkg/sql/catalog/bootstrap",
  "//pkg/sql/catalog/catalogkeys",
  "//pkg/sql/catalog/catprivilege",
- "//pkg/sql/catalog/descidgen",
  "//pkg/sql/catalog/descpb",
  "//pkg/sql/catalog/descs",
  "//pkg/sql/catalog/desctestutils",
4 changes: 2 additions & 2 deletions pkg/ccl/backupccl/backup_test.go
@@ -73,7 +73,6 @@ import (
  "github.com/cockroachdb/cockroach/pkg/sql"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/bootstrap"
- "github.com/cockroachdb/cockroach/pkg/sql/catalog/descidgen"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/descpb"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/desctestutils"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/systemschema"
@@ -1633,7 +1632,8 @@ func TestBackupRestoreResume(t *testing.T) {
  sqlDB.Exec(t, `BACKUP DATABASE DATA TO $1`, restoreDir)
  sqlDB.Exec(t, `CREATE DATABASE restoredb`)
  restoreDatabaseID := sqlutils.QueryDatabaseID(t, sqlDB.DB, "restoredb")
- restoreTableID, err := descidgen.GenerateUniqueDescID(ctx, tc.Servers[0].DB(), keys.SystemSQLCodec)
+ restoreTableID, err := tc.Server(0).ExecutorConfig().(sql.ExecutorConfig).
+ 	DescIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  t.Fatal(err)
  }
4 changes: 2 additions & 2 deletions pkg/ccl/backupccl/datadriven_test.go
@@ -28,7 +28,6 @@ import (
  "github.com/cockroachdb/cockroach/pkg/sql"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/catalogkeys"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/catprivilege"
- "github.com/cockroachdb/cockroach/pkg/sql/catalog/descidgen"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/systemschema"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/tabledesc"
  "github.com/cockroachdb/cockroach/pkg/testutils"
@@ -679,7 +678,8 @@ func TestDataDriven(t *testing.T) {
  codec := ds.servers[lastCreatedServer].ExecutorConfig().(sql.ExecutorConfig).Codec
  dummyTable := systemschema.SettingsTable
  err := db.Txn(ctx, func(ctx context.Context, txn *kv.Txn) error {
- id, err := descidgen.GenerateUniqueDescID(ctx, db, codec)
+ id, err := ds.servers[lastCreatedServer].ExecutorConfig().(sql.ExecutorConfig).
+ 	DescIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  return err
  }
3 changes: 1 addition & 2 deletions pkg/ccl/backupccl/restore_job.go
@@ -36,7 +36,6 @@ import (
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/catalogkeys"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/catpb"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/dbdesc"
- "github.com/cockroachdb/cockroach/pkg/sql/catalog/descidgen"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/descpb"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/descs"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/ingesting"
@@ -1165,7 +1164,7 @@ func remapPublicSchemas(
  // if the database does not have a public schema backed by a descriptor
  // (meaning they were created before 22.1), we need to create a public
  // schema descriptor for it.
- id, err := descidgen.GenerateUniqueDescID(ctx, p.ExecCfg().DB, p.ExecCfg().Codec)
+ id, err := p.ExecCfg().DescIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  return err
  }
13 changes: 7 additions & 6 deletions pkg/ccl/backupccl/restore_planning.go
@@ -40,7 +40,6 @@ import (
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/colinfo"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/dbdesc"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/descbuilder"
- "github.com/cockroachdb/cockroach/pkg/sql/catalog/descidgen"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/descpb"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/descs"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/multiregion"
@@ -153,7 +152,7 @@ func synthesizePGTempSchema(
  return errors.Newf("attempted to synthesize temp schema during RESTORE but found"+
  " another schema already using the same schema key %s", schemaName)
  }
- synthesizedSchemaID, err = descidgen.GenerateUniqueDescID(ctx, p.ExecCfg().DB, p.ExecCfg().Codec)
+ synthesizedSchemaID, err = p.ExecCfg().DescIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  return err
  }
@@ -517,7 +516,7 @@ func allocateDescriptorRewrites(
  }
  }

- tempSysDBID, err := descidgen.GenerateUniqueDescID(ctx, p.ExecCfg().DB, p.ExecCfg().Codec)
+ tempSysDBID, err := p.ExecCfg().DescIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  return nil, err
  }
@@ -874,7 +873,7 @@
  if descriptorCoverage == tree.AllDescriptors {
  newID = db.GetID()
  } else {
- newID, err = descidgen.GenerateUniqueDescID(ctx, p.ExecCfg().DB, p.ExecCfg().Codec)
+ newID, err = p.ExecCfg().DescIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  return nil, err
  }
@@ -945,7 +944,7 @@
  // Generate new IDs for the schemas, tables, and types that need to be
  // remapped.
  for _, desc := range descriptorsToRemap {
- id, err := descidgen.GenerateUniqueDescID(ctx, p.ExecCfg().DB, p.ExecCfg().Codec)
+ id, err := p.ExecCfg().DescIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  return nil, err
  }
@@ -2218,7 +2217,9 @@ func planDatabaseModifiersForRestore(
  if defaultPrimaryRegion == "" {
  return nil, nil, nil
  }
- if err := multiregionccl.CheckClusterSupportsMultiRegion(p.ExecCfg()); err != nil {
+ if err := multiregionccl.CheckClusterSupportsMultiRegion(
+ 	p.ExecCfg().Settings, p.ExecCfg().NodeInfo.LogicalClusterID(), p.ExecCfg().Organization(),
+ ); err != nil {
  return nil, nil, errors.WithHintf(
  err,
  "try disabling the default PRIMARY REGION by using RESET CLUSTER SETTING %s",
1 change: 1 addition & 0 deletions pkg/ccl/changefeedccl/changefeed_test.go
@@ -3436,6 +3436,7 @@ func TestChangefeedJobRetryOnNoInboundStream(t *testing.T) {
  // force fast "no inbound stream" error
  var oldMaxRunningFlows int
  var oldTimeout string
+ sqlDB.Exec(t, "SET CLUSTER SETTING sql.distsql.flow_scheduler_queueing.enabled = true")
  sqlDB.QueryRow(t, "SHOW CLUSTER SETTING sql.distsql.max_running_flows").Scan(&oldMaxRunningFlows)
  sqlDB.QueryRow(t, "SHOW CLUSTER SETTING sql.distsql.flow_stream_timeout").Scan(&oldTimeout)
  serverutils.SetClusterSetting(t, cluster, "sql.distsql.max_running_flows", 0)
4 changes: 3 additions & 1 deletion pkg/ccl/multiregionccl/BUILD.bazel
@@ -8,15 +8,17 @@ go_library(
  visibility = ["//visibility:public"],
  deps = [
  "//pkg/ccl/utilccl",
+ "//pkg/settings/cluster",
  "//pkg/sql",
  "//pkg/sql/catalog/catpb",
- "//pkg/sql/catalog/descidgen",
  "//pkg/sql/catalog/descpb",
  "//pkg/sql/catalog/multiregion",
  "//pkg/sql/catalog/typedesc",
  "//pkg/sql/pgwire/pgcode",
  "//pkg/sql/pgwire/pgerror",
+ "//pkg/sql/sem/eval",
  "//pkg/sql/sem/tree",
+ "//pkg/util/uuid",
  ],
)

25 changes: 17 additions & 8 deletions pkg/ccl/multiregionccl/multiregion.go
@@ -13,15 +13,17 @@ import (
  "sort"

  "github.com/cockroachdb/cockroach/pkg/ccl/utilccl"
+ "github.com/cockroachdb/cockroach/pkg/settings/cluster"
  "github.com/cockroachdb/cockroach/pkg/sql"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/catpb"
- "github.com/cockroachdb/cockroach/pkg/sql/catalog/descidgen"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/descpb"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/multiregion"
  "github.com/cockroachdb/cockroach/pkg/sql/catalog/typedesc"
  "github.com/cockroachdb/cockroach/pkg/sql/pgwire/pgcode"
  "github.com/cockroachdb/cockroach/pkg/sql/pgwire/pgerror"
+ "github.com/cockroachdb/cockroach/pkg/sql/sem/eval"
  "github.com/cockroachdb/cockroach/pkg/sql/sem/tree"
+ "github.com/cockroachdb/cockroach/pkg/util/uuid"
)

func init() {
@@ -31,15 +33,20 @@ func init() {

  func initializeMultiRegionMetadata(
  ctx context.Context,
- execCfg *sql.ExecutorConfig,
+ descIDGenerator eval.DescIDGenerator,
+ settings *cluster.Settings,
+ clusterID uuid.UUID,
+ clusterOrganization string,
  liveRegions sql.LiveClusterRegions,
  goal tree.SurvivalGoal,
  primaryRegion catpb.RegionName,
  regions []tree.Name,
  dataPlacement tree.DataPlacement,
  secondaryRegion catpb.RegionName,
  ) (*multiregion.RegionConfig, error) {
- if err := CheckClusterSupportsMultiRegion(execCfg); err != nil {
+ if err := CheckClusterSupportsMultiRegion(
+ 	settings, clusterID, clusterOrganization,
+ ); err != nil {
  return nil, err
  }

@@ -94,7 +101,7 @@

  // Generate a unique ID for the multi-region enum type descriptor here as
  // well.
- regionEnumID, err := descidgen.GenerateUniqueDescID(ctx, execCfg.DB, execCfg.Codec)
+ regionEnumID, err := descIDGenerator.GenerateUniqueDescID(ctx)
  if err != nil {
  return nil, err
  }
@@ -117,11 +124,13 @@

  // CheckClusterSupportsMultiRegion returns whether the current cluster supports
  // multi-region features.
- func CheckClusterSupportsMultiRegion(execCfg *sql.ExecutorConfig) error {
+ func CheckClusterSupportsMultiRegion(
+ 	settings *cluster.Settings, clusterID uuid.UUID, organization string,
+ ) error {
  return utilccl.CheckEnterpriseEnabled(
- execCfg.Settings,
- execCfg.NodeInfo.LogicalClusterID(),
- execCfg.Organization(),
+ settings,
+ clusterID,
+ organization,
  "multi-region features",
  )
  }
@@ -11,7 +11,6 @@ CREATE TABLE multi_region_test_db.public.table_regional_by_table (
+object {104 106 _crdb_internal_region} -> 107
+object {104 106 table_regional_by_table} -> 108


test
DROP DATABASE multi_region_test_db CASCADE
----
@@ -11,7 +11,6 @@ CREATE TABLE multi_region_test_db.public.table_regional_by_row (
+object {104 106 _crdb_internal_region} -> 107
+object {104 106 table_regional_by_row} -> 108


test
DROP TABLE multi_region_test_db.public.table_regional_by_row;
----
@@ -0,0 +1,38 @@
/* setup */
CREATE TABLE defaultdb.t1 (id INT PRIMARY KEY, name VARCHAR(256), money INT);

/* test */
CREATE INDEX id1
ON defaultdb.t1 (id, name)
STORING (money)
PARTITION BY LIST (id) (PARTITION p1 VALUES IN (1));
EXPLAIN (ddl) rollback at post-commit stage 1 of 7;
----
Schema change plan for rolling back CREATE INDEX ‹id1› ON ‹defaultdb›.public.‹t1› (‹id›, ‹name›) STORING (‹money›) PARTITION BY LIST (‹id›) (PARTITION ‹p1› VALUES IN (‹1›));
└── PostCommitNonRevertiblePhase
└── Stage 1 of 1 in PostCommitNonRevertiblePhase
├── 10 elements transitioning toward ABSENT
│ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 1, IndexID: 2}
│ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 2, IndexID: 2}
│ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 3, IndexID: 2}
│ ├── BACKFILL_ONLY → ABSENT SecondaryIndex:{DescID: 104, IndexID: 2, ConstraintID: 0, TemporaryIndexID: 3, SourceIndexID: 1}
│ ├── PUBLIC → ABSENT IndexPartitioning:{DescID: 104, IndexID: 2}
│ ├── DELETE_ONLY → ABSENT TemporaryIndex:{DescID: 104, IndexID: 3, SourceIndexID: 1}
│ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 1, IndexID: 3}
│ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 2, IndexID: 3}
│ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 3, IndexID: 3}
│ └── PUBLIC → ABSENT IndexPartitioning:{DescID: 104, IndexID: 3}
└── 13 Mutation operations
├── RemoveColumnFromIndex {"ColumnID":1,"IndexID":3,"TableID":104}
├── RemoveColumnFromIndex {"ColumnID":2,"IndexID":3,"Ordinal":1,"TableID":104}
├── RemoveColumnFromIndex {"ColumnID":3,"IndexID":3,"Kind":2,"TableID":104}
├── RemoveColumnFromIndex {"ColumnID":1,"IndexID":2,"TableID":104}
├── RemoveColumnFromIndex {"ColumnID":2,"IndexID":2,"Ordinal":1,"TableID":104}
├── RemoveColumnFromIndex {"ColumnID":3,"IndexID":2,"Kind":2,"TableID":104}
├── LogEvent {"TargetStatus":1}
├── CreateGcJobForIndex {"IndexID":2,"TableID":104}
├── MakeIndexAbsent {"IndexID":2,"TableID":104}
├── CreateGcJobForIndex {"IndexID":3,"TableID":104}
├── MakeIndexAbsent {"IndexID":3,"TableID":104}
├── RemoveJobStateFromDescriptor {"DescriptorID":104}
└── UpdateSchemaChangerJob {"IsNonCancelable":true,"RunningStatus":"all stages compl..."}
@@ -0,0 +1,45 @@
/* setup */
CREATE TABLE defaultdb.t1 (id INT PRIMARY KEY, name VARCHAR(256), money INT);

/* test */
CREATE INDEX id1
ON defaultdb.t1 (id, name)
STORING (money)
PARTITION BY LIST (id) (PARTITION p1 VALUES IN (1));
EXPLAIN (ddl) rollback at post-commit stage 2 of 7;
----
Schema change plan for rolling back CREATE INDEX ‹id1› ON ‹defaultdb›.public.‹t1› (‹id›, ‹name›) STORING (‹money›) PARTITION BY LIST (‹id›) (PARTITION ‹p1› VALUES IN (‹1›));
└── PostCommitNonRevertiblePhase
├── Stage 1 of 2 in PostCommitNonRevertiblePhase
│ ├── 10 elements transitioning toward ABSENT
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 1, IndexID: 2}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 2, IndexID: 2}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 3, IndexID: 2}
│ │ ├── BACKFILL_ONLY → ABSENT SecondaryIndex:{DescID: 104, IndexID: 2, ConstraintID: 0, TemporaryIndexID: 3, SourceIndexID: 1}
│ │ ├── PUBLIC → ABSENT IndexPartitioning:{DescID: 104, IndexID: 2}
│ │ ├── WRITE_ONLY → DELETE_ONLY TemporaryIndex:{DescID: 104, IndexID: 3, SourceIndexID: 1}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 1, IndexID: 3}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 2, IndexID: 3}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 3, IndexID: 3}
│ │ └── PUBLIC → ABSENT IndexPartitioning:{DescID: 104, IndexID: 3}
│ └── 12 Mutation operations
│ ├── MakeDroppedIndexDeleteOnly {"IndexID":3,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":1,"IndexID":3,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":2,"IndexID":3,"Ordinal":1,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":3,"IndexID":3,"Kind":2,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":1,"IndexID":2,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":2,"IndexID":2,"Ordinal":1,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":3,"IndexID":2,"Kind":2,"TableID":104}
│ ├── LogEvent {"TargetStatus":1}
│ ├── CreateGcJobForIndex {"IndexID":2,"TableID":104}
│ ├── MakeIndexAbsent {"IndexID":2,"TableID":104}
│ ├── SetJobStateOnDescriptor {"DescriptorID":104}
│ └── UpdateSchemaChangerJob {"IsNonCancelable":true,"RunningStatus":"PostCommitNonRev..."}
└── Stage 2 of 2 in PostCommitNonRevertiblePhase
├── 1 element transitioning toward ABSENT
│ └── DELETE_ONLY → ABSENT TemporaryIndex:{DescID: 104, IndexID: 3, SourceIndexID: 1}
└── 4 Mutation operations
├── CreateGcJobForIndex {"IndexID":3,"TableID":104}
├── MakeIndexAbsent {"IndexID":3,"TableID":104}
├── RemoveJobStateFromDescriptor {"DescriptorID":104}
└── UpdateSchemaChangerJob {"IsNonCancelable":true,"RunningStatus":"all stages compl..."}
@@ -0,0 +1,45 @@
/* setup */
CREATE TABLE defaultdb.t1 (id INT PRIMARY KEY, name VARCHAR(256), money INT);

/* test */
CREATE INDEX id1
ON defaultdb.t1 (id, name)
STORING (money)
PARTITION BY LIST (id) (PARTITION p1 VALUES IN (1));
EXPLAIN (ddl) rollback at post-commit stage 3 of 7;
----
Schema change plan for rolling back CREATE INDEX ‹id1› ON ‹defaultdb›.public.‹t1› (‹id›, ‹name›) STORING (‹money›) PARTITION BY LIST (‹id›) (PARTITION ‹p1› VALUES IN (‹1›));
└── PostCommitNonRevertiblePhase
├── Stage 1 of 2 in PostCommitNonRevertiblePhase
│ ├── 10 elements transitioning toward ABSENT
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 1, IndexID: 2}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 2, IndexID: 2}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 3, IndexID: 2}
│ │ ├── BACKFILL_ONLY → ABSENT SecondaryIndex:{DescID: 104, IndexID: 2, ConstraintID: 0, TemporaryIndexID: 3, SourceIndexID: 1}
│ │ ├── PUBLIC → ABSENT IndexPartitioning:{DescID: 104, IndexID: 2}
│ │ ├── WRITE_ONLY → DELETE_ONLY TemporaryIndex:{DescID: 104, IndexID: 3, SourceIndexID: 1}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 1, IndexID: 3}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 2, IndexID: 3}
│ │ ├── PUBLIC → ABSENT IndexColumn:{DescID: 104, ColumnID: 3, IndexID: 3}
│ │ └── PUBLIC → ABSENT IndexPartitioning:{DescID: 104, IndexID: 3}
│ └── 12 Mutation operations
│ ├── MakeDroppedIndexDeleteOnly {"IndexID":3,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":1,"IndexID":3,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":2,"IndexID":3,"Ordinal":1,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":3,"IndexID":3,"Kind":2,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":1,"IndexID":2,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":2,"IndexID":2,"Ordinal":1,"TableID":104}
│ ├── RemoveColumnFromIndex {"ColumnID":3,"IndexID":2,"Kind":2,"TableID":104}
│ ├── LogEvent {"TargetStatus":1}
│ ├── CreateGcJobForIndex {"IndexID":2,"TableID":104}
│ ├── MakeIndexAbsent {"IndexID":2,"TableID":104}
│ ├── SetJobStateOnDescriptor {"DescriptorID":104}
│ └── UpdateSchemaChangerJob {"IsNonCancelable":true,"RunningStatus":"PostCommitNonRev..."}
└── Stage 2 of 2 in PostCommitNonRevertiblePhase
├── 1 element transitioning toward ABSENT
│ └── DELETE_ONLY → ABSENT TemporaryIndex:{DescID: 104, IndexID: 3, SourceIndexID: 1}
└── 4 Mutation operations
├── CreateGcJobForIndex {"IndexID":3,"TableID":104}
├── MakeIndexAbsent {"IndexID":3,"TableID":104}
├── RemoveJobStateFromDescriptor {"DescriptorID":104}
└── UpdateSchemaChangerJob {"IsNonCancelable":true,"RunningStatus":"all stages compl..."}