79465: changefeedccl: error on duplicate targets r=HonoreDB a=HonoreDB

Previously, the behavior when you specified two equivalent targets in a
changefeed was undefined. This PR adds validation and returns an error.

For example,
`use defaultdb; CREATE CHANGEFEED FOR defaultdb.foo, TABLE foo` will
error and give the canonicalized targets that resolved to the same
table. You can still do `FOR db1.foo, db2.foo` as those will resolve
to different tables even though they may emit to the same topic.
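Conceptually, the check canonicalizes each raw target and keys a map on the
canonical form. The actual implementation uses the changefeed's target
specification type; the following is only a minimal Go sketch of the same
idea, with a simplified stand-in `targetSpec` type and resolver:

```go
package main

import "fmt"

// targetSpec is a simplified stand-in for the canonical form a changefeed
// target resolves to (table ID plus column family), not the real
// jobspb.ChangefeedTargetSpecification.
type targetSpec struct {
	tableID    int
	familyName string
}

// checkDuplicateTargets resolves every raw target to its canonical spec and
// errors on the first pair that collide, naming both spellings.
func checkDuplicateTargets(rawTargets []string, resolve func(string) targetSpec) error {
	seen := make(map[targetSpec]string)
	for _, raw := range rawTargets {
		spec := resolve(raw)
		if dup, ok := seen[spec]; ok {
			return fmt.Errorf("CHANGEFEED targets %s and %s are duplicates", dup, raw)
		}
		seen[spec] = raw
	}
	return nil
}

func main() {
	// Pretend both spellings resolve to the same table descriptor.
	resolve := func(string) targetSpec { return targetSpec{tableID: 52} }
	fmt.Println(checkDuplicateTargets([]string{"foo", "defaultdb.foo"}, resolve))
}
```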

Closes cockroachdb#78285

Release note (sql change): Changefeed statements now detect duplicate targets and throw an error.

79523: backupccl: ensure user passes locality aware uris with incremental_location r=benbardin a=msbutler

Release note (sql change): This patch ensures the user passes the same number
of locality-aware URIs for the full backup destination as for the
incremental_location parameter. For example:

Good:
`BACKUP INTO LATEST IN ($1, $2, $3) WITH incremental_location = ($4, $5, $6)`

Bad:
`BACKUP INTO LATEST IN $1 WITH incremental_location = ($2, $3, $4)`

Note that the non-locality-aware URIs for the full backup don't really affect
incremental backup planning -- the patch just adds guardrails to the UX. Further
work will ensure users cannot create incremental backup chains with mixed
localities (cockroachdb#79135).
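The guardrail itself is just a count comparison between the destination URI
list and the incremental_location URI list. A hedged Go sketch of roughly what
the planning-time check does (simplified signature; the real check lives in
backupccl's plan hook):

```go
package main

import (
	"errors"
	"fmt"
)

// validateIncrementalLocation sketches the new guardrail: if
// incremental_location is specified at all, it must carry exactly as many
// locality-aware URIs as the full backup destination.
func validateIncrementalLocation(to, incrementalStorage []string) error {
	if len(incrementalStorage) > 0 && len(incrementalStorage) != len(to) {
		return errors.New("the incremental_location option must contain the same number of" +
			" locality aware URIs as the full backup destination")
	}
	return nil
}

func main() {
	// Three destination URIs but only one incremental_location URI -> error.
	to := []string{"s1", "s2", "s3"}
	inc := []string{"s4"}
	fmt.Println(validateIncrementalLocation(to, inc))
}
```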

79538: spanconfigmanager: create auto span config reconciliation job as node r=arulajmani a=arulajmani

Previously, we were creating this job as the root user. The node user is
more appropriate here, given that the node is acting on its own behalf
rather than the job being created by an actual end user.

Release note: None

79541: bazel: provide opt-out for `crdb_test` configuration, update nightly r=mari-crl,cucaroach a=rickystewart

Up until this point we've forced the `test` configuration when running
`bazel test`; this change simply provides a `crdb_test_off` configuration
to turn that logic off if requested.

Also update the one nightly that needs to consume this.

Closes cockroachdb#79478.

Release note: None

79561: sql: fix bug in ALTER INDEX ... PARTITION BY r=ajwerner a=ajwerner

There was a subtle bug here whereby we'd increment the descriptor version an
extra time when running ALTER INDEX ... PARTITION BY in the same
transaction as a previous DDL statement on the same table. It's not exactly
clear what the implications would be, but they are subtle.
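As a rough intuition for the fix (a toy analogy, not the real descriptor
collection code): the transaction's collection should hand out one shared
mutable copy of the table per transaction, and the version should be bumped
only the first time that copy is written. Building a fresh mutable from a
cached copy loses that bookkeeping, which is how the extra bump crept in.

```go
package main

import "fmt"

// Toy model: one shared mutable descriptor per table per transaction, and at
// most one version bump per transaction regardless of how many DDL statements
// touch it. Names and types here are illustrative only.
type tableDesc struct {
	id      int
	version int
	bumped  bool // version already incremented in this transaction
}

type collection struct {
	mutables map[int]*tableDesc
}

// getMutable returns the single tracked mutable copy for the table,
// creating it from the stored version on first use.
func (c *collection) getMutable(id, storedVersion int) *tableDesc {
	if d, ok := c.mutables[id]; ok {
		return d
	}
	d := &tableDesc{id: id, version: storedVersion}
	c.mutables[id] = d
	return d
}

// write bumps the version at most once per transaction.
func (c *collection) write(d *tableDesc) {
	if !d.bumped {
		d.version++
		d.bumped = true
	}
}

func main() {
	c := &collection{mutables: map[int]*tableDesc{}}

	// CREATE INDEX ...: first DDL in the transaction bumps 1 -> 2.
	c.write(c.getMutable(52, 1))

	// ALTER INDEX ... PARTITION BY in the same transaction: re-resolving
	// through the collection reuses the same copy, so there is no second
	// bump. The buggy path instead rebuilt a mutable from a cached
	// descriptor, bypassing this bookkeeping.
	c.write(c.getMutable(52, 1))

	fmt.Println(c.getMutable(52, 1).version) // prints 2, not 3
}
```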

Release note: None

79584: dev: fix staging error post-`dev build tests` r=irfansharif a=irfansharif

We were treating the test target as a `go_binary` and were therefore trying
to stage it.

Release note: None

Co-authored-by: Aaron Zinger <[email protected]>
Co-authored-by: Michael Butler <[email protected]>
Co-authored-by: arulajmani <[email protected]>
Co-authored-by: Ricky Stewart <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
Co-authored-by: irfan sharif <[email protected]>
7 people committed Apr 7, 2022
7 parents 4940bc1 + 2111741 + b99c29e + d0469e4 + d1c32c9 + 9351707 + 74c39f4 commit b038da1
Showing 15 changed files with 137 additions and 36 deletions.
2 changes: 2 additions & 0 deletions .bazelrc
@@ -1,12 +1,14 @@
# Define a set up flag aliases, so people can use `--cross` instead of the
# longer `//build/toolchains:cross_flag`.
build --flag_alias=crdb_test=//build/toolchains:crdb_test_flag
build --flag_alias=crdb_test_off=//build/toolchains:crdb_test_off_flag
build --flag_alias=cross=//build/toolchains:cross_flag
build --flag_alias=dev=//build/toolchains:dev_flag
build --flag_alias=lintonbuild=//build/toolchains:nogo_flag
build --flag_alias=nolintonbuild=//build/toolchains:nonogo_explicit_flag
build --flag_alias=with_ui=//pkg/ui:with_ui_flag

build:crdb_test_off --crdb_test_off
build:cross --cross
build:dev --dev
build:lintonbuild --lintonbuild
5 changes: 2 additions & 3 deletions build/teamcity/cockroach/nightlies/sqlite_logic_test_impl.sh
@@ -9,11 +9,10 @@ bazel build //pkg/cmd/bazci //pkg/cmd/github-post //pkg/cmd/testfilter --config=
BAZEL_BIN=$(bazel info bazel-bin --config=ci)
GO_TEST_JSON_OUTPUT_FILE=/artifacts/test.json.txt
exit_status=0
$BAZEL_BIN/pkg/cmd/bazci/bazci_/bazci --config=ci \
$BAZEL_BIN/pkg/cmd/bazci/bazci_/bazci --config=ci --config=crdb_test_off \
test //pkg/sql/logictest:logictest_test -- \
--test_arg -bigtest --test_arg -flex-types --test_arg -parallel=4 \
--define gotags=bazel,crdb_test_off --test_timeout 86400 \
--test_filter '^TestSqlLiteLogic$|^TestTenantSQLLiteLogic$' \
--test_timeout 86400 --test_filter '^TestSqlLiteLogic$|^TestTenantSQLLiteLogic$' \
--test_env=GO_TEST_JSON_OUTPUT_FILE=$GO_TEST_JSON_OUTPUT_FILE || exit_status=$?
process_test_json \
$BAZEL_BIN/pkg/cmd/testfilter/testfilter_/testfilter \
15 changes: 14 additions & 1 deletion build/toolchains/BUILD.bazel
@@ -289,17 +289,30 @@ platform(

# There are aliases for each of these flags defined in .bazelrc; for example,
# --crdb_test instead of --//build/toolchains:crdb_test_flag.

#
# crdb_test_flag is set to true for every `bazel test` invocation (see .bazelrc).
# When building a test executable via `bazel build`, you want to make sure you
# set this flag (via `--config test` or `--crdb_test`) or else the executable
# won't be compiled with the appropriate test logic.
# crdb_test_off_flag is provided as an override to disable this default behavior
# if desired. It's unnecessary under any other circumstances.
bool_flag(
name = "crdb_test_flag",
build_setting_default = False,
visibility = ["//visibility:public"],
)

bool_flag(
name = "crdb_test_off_flag",
build_setting_default = False,
visibility = [":__pkg__"],
)

config_setting(
name = "crdb_test",
flag_values = {
":crdb_test_flag": "true",
":crdb_test_off_flag": "false",
},
)

2 changes: 1 addition & 1 deletion dev
@@ -3,7 +3,7 @@
set -euo pipefail

# Bump this counter to force rebuilding `dev` on all machines.
DEV_VERSION=28
DEV_VERSION=29

THIS_DIR=$(cd "$(dirname "$0")" && pwd)
BINARY_DIR=$THIS_DIR/bin/dev-versions
4 changes: 4 additions & 0 deletions pkg/ccl/backupccl/backup_planning.go
@@ -604,6 +604,10 @@ func backupPlanHook(
if !backupStmt.Nested && len(incrementalStorage) > 0 {
return errors.New("incremental_location option not supported with `BACKUP TO` syntax")
}
if len(incrementalStorage) > 0 && (len(incrementalStorage) != len(to)) {
return errors.New("the incremental_location option must contain the same number of locality" +
" aware URIs as the full backup destination")
}

endTime := p.ExecCfg().Clock.Now()
if backupStmt.AsOf.Expr != nil {
10 changes: 7 additions & 3 deletions pkg/ccl/backupccl/backup_test.go
@@ -653,7 +653,11 @@ func TestBackupAndRestoreJobDescription(t *testing.T) {
sqlDB.Exec(t, "BACKUP INTO ($1, $2, $3)", collections...)
sqlDB.Exec(t, "BACKUP INTO LATEST IN ($1, $2, $3)", collections...)
sqlDB.Exec(t, "BACKUP INTO $4 IN ($1, $2, $3)", append(collections, "subdir")...)
sqlDB.Exec(t, "BACKUP INTO LATEST IN $4 WITH incremental_location = ($1, $2, $3)",
sqlDB.Exec(t, "BACKUP INTO LATEST IN ($1, $2, $3) WITH incremental_location = ($4, $5, $6)",
append(collections, incrementals...)...)

sqlDB.ExpectErr(t, "the incremental_location option must contain the same number of locality",
"BACKUP INTO LATEST IN $4 WITH incremental_location = ($1, $2, $3)",
append(incrementals, collections[0])...)

// Find the subdirectory created by the full BACKUP INTO statement.
@@ -675,8 +679,8 @@ func TestBackupAndRestoreJobDescription(t *testing.T) {
collections[0], collections[1], collections[2])},
{fmt.Sprintf("BACKUP INTO '%s' IN ('%s', '%s', '%s')", "/subdir",
collections[0], collections[1], collections[2])},
{fmt.Sprintf("BACKUP INTO '%s' IN '%s' WITH incremental_location = ('%s', '%s', '%s')",
"/subdir", collections[0], incrementals[0],
{fmt.Sprintf("BACKUP INTO '%s' IN ('%s', '%s', '%s') WITH incremental_location = ('%s', '%s', '%s')",
"/subdir", collections[0], collections[1], collections[2], incrementals[0],
incrementals[1], incrementals[2])},
},
)
10 changes: 9 additions & 1 deletion pkg/ccl/changefeedccl/changefeed_stmt.go
@@ -581,10 +581,11 @@ func getTargetsAndTables(
}
}
}
seen := make(map[jobspb.ChangefeedTargetSpecification]tree.ChangefeedTarget)
for i, ct := range rawTargets {
desc, ok := targetDescs[ct.TableName]
if !ok {
return nil, nil, errors.Newf("could not match %v to a fetched descriptor. fetched were %v", ct.TableName, targetDescs)
return nil, nil, errors.Newf("could not match %v to a fetched descriptor. Fetched were %v", ct.TableName, targetDescs)
}
td, ok := desc.(catalog.TableDescriptor)
if !ok {
@@ -604,6 +605,13 @@
FamilyName: string(ct.FamilyName),
StatementTimeName: tables[td.GetID()].StatementTimeName,
}
if dup, isDup := seen[targets[i]]; isDup {
return nil, nil, errors.Errorf(
"CHANGEFEED targets %s and %s are duplicates",
tree.AsString(&dup), tree.AsString(&ct),
)
}
seen[targets[i]] = ct
}
return targets, tables, nil
}
15 changes: 15 additions & 0 deletions pkg/ccl/changefeedccl/changefeed_test.go
@@ -3506,6 +3506,21 @@ func TestChangefeedErrors(t *testing.T) {
`EXPERIMENTAL CHANGEFEED FOR vw`,
)

sqlDB.ExpectErr(
t, `CHANGEFEED targets TABLE foo and TABLE foo are duplicates`,
`EXPERIMENTAL CHANGEFEED FOR foo, foo`,
)
sqlDB.ExpectErr(
t, `CHANGEFEED targets TABLE foo and TABLE defaultdb.foo are duplicates`,
`EXPERIMENTAL CHANGEFEED FOR foo, defaultdb.foo`,
)
sqlDB.Exec(t,
`CREATE TABLE threefams (a int, b int, c int, family f_a(a), family f_b(b), family f_c(c))`)
sqlDB.ExpectErr(
t, `CHANGEFEED targets TABLE foo FAMILY f_a and TABLE foo FAMILY f_a are duplicates`,
`EXPERIMENTAL CHANGEFEED FOR foo family f_a, foo FAMILY f_b, foo FAMILY f_a`,
)

// Backup has the same bad error message #28170.
sqlDB.ExpectErr(
t, `"information_schema.tables" does not exist`,
25 changes: 25 additions & 0 deletions pkg/ccl/logictestccl/testdata/logic_test/schema_change_in_txn
@@ -0,0 +1,25 @@
# Regression test for a situation involving creating a table in a transaction
# and altering the index when referenced by name.
subtest index_resolution_does_not_lead_to_new_version

statement ok
BEGIN;
CREATE DATABASE db;
CREATE TABLE db.t(i INT PRIMARY KEY, j INT, k INT);
CREATE INDEX idx_i ON db.t (i);
ALTER INDEX db.t@idx_i PARTITION BY LIST (i) (
PARTITION one_and_five VALUES IN (1, 5),
PARTITION everything_else VALUES IN (DEFAULT)
);
COMMIT;

# Before the change which introduced this test, it would erroneously return 2.
query I
SELECT (crdb_internal.pb_to_json('desc', descriptor)->'table'->>'version')::INT8
FROM system.descriptor
WHERE id = 'db.t'::regclass;
----
1

statement ok
DROP DATABASE db
30 changes: 16 additions & 14 deletions pkg/ccl/partitionccl/partition_test.go
@@ -1424,29 +1424,31 @@ func TestPrimaryKeyChangeZoneConfigs(t *testing.T) {

// Write a table with some partitions into the database,
// and change its primary key.
if _, err := sqlDB.Exec(`
CREATE DATABASE t;
USE t;
CREATE TABLE t (
for _, stmt := range []string{
`CREATE DATABASE t`,
`USE t`,
`CREATE TABLE t (
x INT PRIMARY KEY,
y INT NOT NULL,
z INT,
w INT,
INDEX i1 (z),
INDEX i2 (w),
FAMILY (x, y, z, w)
);
ALTER INDEX t@i1 PARTITION BY LIST (z) (
)`,
`ALTER INDEX t@i1 PARTITION BY LIST (z) (
PARTITION p1 VALUES IN (1)
);
ALTER INDEX t@i2 PARTITION BY LIST (w) (
)`,
`ALTER INDEX t@i2 PARTITION BY LIST (w) (
PARTITION p2 VALUES IN (3)
);
ALTER PARTITION p1 OF INDEX t@i1 CONFIGURE ZONE USING gc.ttlseconds = 15210;
ALTER PARTITION p2 OF INDEX t@i2 CONFIGURE ZONE USING gc.ttlseconds = 15213;
ALTER TABLE t ALTER PRIMARY KEY USING COLUMNS (y)
`); err != nil {
t.Fatal(err)
)`,
`ALTER PARTITION p1 OF INDEX t@i1 CONFIGURE ZONE USING gc.ttlseconds = 15210`,
`ALTER PARTITION p2 OF INDEX t@i2 CONFIGURE ZONE USING gc.ttlseconds = 15213`,
`ALTER TABLE t ALTER PRIMARY KEY USING COLUMNS (y)`,
} {
if _, err := sqlDB.Exec(stmt); err != nil {
t.Fatal(err)
}
}

// Get the zone config corresponding to the table.
9 changes: 5 additions & 4 deletions pkg/cmd/dev/build.go
@@ -66,12 +66,9 @@ func makeBuildCmd(runE func(cmd *cobra.Command, args []string) error) *cobra.Com
}

// TODO(irfansharif): Add grouping shorthands like "all" or "bins", etc.
// TODO(irfansharif): Make sure all the relevant binary targets are defined
// above, and in usage docs.

// buildTargetMapping maintains shorthands that map 1:1 with bazel targets.
var buildTargetMapping = map[string]string{
"all_tests": "//pkg:all_tests",
"bazel-remote": bazelRemoteTarget,
"buildifier": "@com_github_bazelbuild_buildtools//buildifier:buildifier",
"buildozer": "@com_github_bazelbuild_buildtools//buildozer:buildozer",
@@ -337,7 +334,11 @@ func (d *dev) getBasicBuildArgs(
}

args = append(args, aliased)
buildTargets = append(buildTargets, buildTarget{fullName: aliased, kind: "go_binary"})
if aliased == "//pkg:all_tests" {
buildTargets = append(buildTargets, buildTarget{fullName: aliased, kind: "test_suite"})
} else {
buildTargets = append(buildTargets, buildTarget{fullName: aliased, kind: "go_binary"})
}
}

// Add --config=with_ui iff we're building a target that needs it.
8 changes: 8 additions & 0 deletions pkg/cmd/dev/testdata/datadriven/dev-build
@@ -66,3 +66,11 @@ mkdir crdb-checkout/bin
bazel info bazel-bin --color=no
rm crdb-checkout/bin/stress
cp sandbox/external/com_github_cockroachdb_stress/stress_/stress crdb-checkout/bin/stress

exec
dev build tests
----
bazel build //pkg:all_tests
bazel info workspace --color=no
mkdir crdb-checkout/bin
bazel info bazel-bin --color=no
2 changes: 1 addition & 1 deletion pkg/spanconfig/spanconfigmanager/manager.go
@@ -180,7 +180,7 @@ func (m *Manager) createAndStartJobIfNoneExists(ctx context.Context) (bool, erro
record := jobs.Record{
JobID: m.jr.MakeJobID(),
Description: "reconciling span configurations",
Username: security.RootUserName(),
Username: security.NodeUserName(),
Details: jobspb.AutoSpanConfigReconciliationDetails{},
Progress: jobspb.AutoSpanConfigReconciliationProgress{},
NonCancelable: true,
12 changes: 8 additions & 4 deletions pkg/sql/logictest/testdata/logic_test/jobs
@@ -34,13 +34,17 @@ SCHEMA CHANGE GC GC for temporary index used during index backfill root
query TTT
SELECT job_type, description, user_name FROM crdb_internal.jobs WHERE user_name = 'root'
----
AUTO SPAN CONFIG RECONCILIATION reconciling span configurations root
SCHEMA CHANGE updating version for users table root
SCHEMA CHANGE updating version for role options table root
SCHEMA CHANGE updating privileges for database 104 root
SCHEMA CHANGE CREATE INDEX ON test.public.t (x) root
SCHEMA CHANGE GC GC for temporary index used during index backfill root

query TTT
SELECT job_type, description, user_name FROM crdb_internal.jobs WHERE user_name = 'node'
----
AUTO SPAN CONFIG RECONCILIATION reconciling span configurations node

user testuser

# a non-admin user cannot see the admin jobs
@@ -79,7 +83,7 @@ SCHEMA CHANGE GC GC for temporary index used during index backfill testuser
user root

query TTT
SELECT job_type, description, user_name FROM [SHOW JOBS] WHERE user_name IN ('root', 'testuser')
SELECT job_type, description, user_name FROM [SHOW JOBS] WHERE user_name IN ('root', 'testuser', 'node')
----
SCHEMA CHANGE updating version for users table root
SCHEMA CHANGE updating version for role options table root
@@ -90,9 +94,9 @@ SCHEMA CHANGE CREATE INDEX ON test.public.u (x) testuser
SCHEMA CHANGE GC GC for temporary index used during index backfill testuser

query TTT
SELECT job_type, description, user_name FROM crdb_internal.jobs WHERE user_name IN ('root', 'testuser')
SELECT job_type, description, user_name FROM crdb_internal.jobs WHERE user_name IN ('root', 'testuser', 'node')
----
AUTO SPAN CONFIG RECONCILIATION reconciling span configurations root
AUTO SPAN CONFIG RECONCILIATION reconciling span configurations node
SCHEMA CHANGE updating version for users table root
SCHEMA CHANGE updating version for role options table root
SCHEMA CHANGE updating privileges for database 104 root
24 changes: 20 additions & 4 deletions pkg/sql/resolver.go
@@ -1008,13 +1008,29 @@ func (p *planner) getTableAndIndex(
}
optIdx := idx.(*optIndex)

// Resolve the object name for logging if
// its missing.
// Set the object name for logging if it's missing.
if tableWithIndex.Table.ObjectName == "" {
tableWithIndex.Table = tree.MakeTableNameFromPrefix(qualifiedName.ObjectNamePrefix, qualifiedName.ObjectName)
tableWithIndex.Table = tree.MakeTableNameFromPrefix(
qualifiedName.ObjectNamePrefix, qualifiedName.ObjectName,
)
}

return tabledesc.NewBuilder(optIdx.tab.desc.TableDesc()).BuildExistingMutableTable(), optIdx.idx, nil
// Use the descriptor collection to get a proper handle to the mutable
// descriptor for the relevant table and use that mutable object to
// get a handle to the corresponding index.
tableID := optIdx.tab.desc.GetID()
mut, err := p.Descriptors().GetMutableTableVersionByID(ctx, tableID, p.Txn())
if err != nil {
return nil, nil, errors.NewAssertionErrorWithWrappedErrf(err,
"failed to re-resolve table %d for index %s", tableID, tableWithIndex)
}
retIdx, err := mut.FindIndexWithID(optIdx.idx.GetID())
if err != nil {
return nil, nil, errors.NewAssertionErrorWithWrappedErrf(err,
"retrieving index %s (%d) from table which was known to already exist for table %d",
tableWithIndex, optIdx.idx.GetID(), tableID)
}
return mut, retIdx, nil
}

// expandTableGlob expands pattern into a list of objects represented
