63416: sql: emit point deletes during delete fastpath r=yuzefovich a=jordanlewis

Previously, the "deleteRange" SQL operator, which is meant to be a
fast path for cases in which an entire range of keys can be deleted,
always did what it said: emitted DeleteRange KV operations. This
precluded a crucial optimization: sending point deletes when the list
of deleted keys is exactly known.

For example, a query like `DELETE FROM kv WHERE k = 10000` uses the
"fast path" delete, since it has a contiguous set of keys to delete and
it doesn't need to know the values that were deleted. But in this case,
performance is actually worse with a DeleteRange KV operation (see
#53939), because:

- ranged KV writes (DeleteRangeRequest) cannot be pipelined because an
  enumeration of the intents that they will leave cannot be known ahead
  of time. They must therefore perform evaluation and replication
  synchronously.
- ranged KV writes (DeleteRangeRequest) result in ranged intent
  resolution, which is less efficient (although this became less
  important since we re-enabled time-bound iterators).

The reason we couldn't previously emit point deletes in this case is
that SQL needs to know whether it deleted something or not. This means
we can't do a "blind put" of a deletion: we need to actually understand
whether there was something that we were "overwriting" with our delete.

This commit modifies the DeleteResponse to always return a boolean
indicating whether a key from the DeleteRequest was actually deleted.

Additionally, the deleteRange SQL operator detects when it can emit
single-key deletes, and does so.
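
To picture the resulting behavior, here is a purely illustrative Go
sketch of the fast-path decision; the types and names (`kvBatch`,
`pointDelete`, `rangeDelete`) are hypothetical stand-ins, not the actual
sql or kv client code.

```go
package main

import "fmt"

// kvBatch is a stand-in for a KV write batch; deletedRows is what SQL
// needs in order to report the affected-row count to the client.
type kvBatch struct {
	deletedRows int
}

// pointDelete models a single-key DeleteRequest. Because the response
// now says whether a live key was actually removed, SQL can count
// deleted rows while the write itself remains pipelinable.
func (b *kvBatch) pointDelete(key string) {
	foundKey := true // placeholder for the boolean carried in DeleteResponse
	if foundKey {
		b.deletedRows++
	}
}

// rangeDelete models DeleteRangeRequest, which must evaluate and
// replicate synchronously because its intents cannot be enumerated up
// front.
func (b *kvBatch) rangeDelete(start, end string) {
	// Placeholder: the real response enumerates the keys it deleted,
	// which would be added to b.deletedRows here.
}

// deleteFastPath emits point deletes when the exact keys are known (as
// in DELETE FROM kv WHERE k = 10000) and falls back to a ranged delete
// otherwise.
func deleteFastPath(b *kvBatch, exactKeys []string, start, end string) {
	if len(exactKeys) > 0 {
		for _, k := range exactKeys {
			b.pointDelete(k)
		}
		return
	}
	b.rangeDelete(start, end)
}

func main() {
	b := &kvBatch{}
	deleteFastPath(b, []string{"/Table/kv/10000"}, "", "")
	fmt.Println("rows deleted:", b.deletedRows)
}
```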

Closes #53939.

Release note (performance improvement): point deletes in SQL are now
more efficient during concurrent workloads.

76233: kv: remove clock update on BatchResponse r=nvanbenschoten a=nvanbenschoten

Before this change, we were updating the local clock with each BatchResponse's WriteTimestamp. This was meant to handle cases where the batch request timestamp was forwarded during evaluation.

This was unnecessary for two reasons.

The first is that a BatchResponse can legitimately carry an operation timestamp that leads the local HLC clock on the leaseholder that evaluated the request. This has been true since #80706, which introduced the concept of a "local timestamp". This allowed us to remove the (broken) attempt at ensuring that the HLC on a leaseholder always leads the MVCC timestamp of all values in the leaseholder's keyspace (see the update to `pkg/kv/kvserver/uncertainty/doc.go` in that PR).

The second was that it was not even correct. The idea behind bumping the HLC on the response path was to ensure that if a batch request was forwarded to a newer timestamp during evaluation and then completed a write, that forwarded timestamp would be reflected in the leaseholder's HLC. However, this ignored the fact that any forwarded timestamp must have either come from an existing value in the range or from the leaseholder's clock. So if those didn't lead the clock, the derived timestamp wouldn't either. It also ignored the fact that the clock bump here happened too late (post-latch release): even if it had actually been needed (it wasn't), it would not have ensured that the timestamp on any lease transfer led the maximum time of any response served by the outgoing leaseholder.
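
As a toy illustration of that bounding argument (entirely made-up types,
not the kvserver implementation):

```go
package main

import "fmt"

// Illustrative types only; not the kvserver code.
type timestamp int64

type hlcClock struct{ wallTime timestamp }

func (c *hlcClock) Now() timestamp { return c.wallTime }

// maxTS returns the larger of two timestamps.
func maxTS(a, b timestamp) timestamp {
	if a > b {
		return a
	}
	return b
}

// forwardedWriteTimestamp models the claim in the text: during
// evaluation a write's timestamp can only be forwarded to timestamps
// the leaseholder has already observed -- an existing value in the
// range or a reading of its own clock -- so the result never leads what
// the leaseholder already knows.
func forwardedWriteTimestamp(reqTS, newestExistingValueTS, clockReading timestamp) timestamp {
	return maxTS(reqTS, maxTS(newestExistingValueTS, clockReading))
}

func main() {
	clock := &hlcClock{wallTime: 100}
	ts := forwardedWriteTimestamp(90, 95, clock.Now())
	// The forwarded timestamp (100) does not lead the clock, so bumping
	// the clock again from the BatchResponse would add nothing.
	fmt.Println(ts <= clock.Now()) // true
}
```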

There are no mixed-version migration concerns with this change, because #80706 ensured that any future-time operation will continue to use the synthetic bit until all nodes are running v22.2 or later.

85350: insights: ingester r=matthewtodd a=matthewtodd

Closes #81021.
    
Here we begin observing statements and transactions asynchronously, to
keep overhead on the hot SQL execution path as low as possible.
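
A minimal sketch of the asynchronous-ingestion idea, assuming a
channel-based buffer; the names (`ingester`, `statementInsight`) are
illustrative, not the actual insights API:

```go
package main

import (
	"fmt"
	"sync"
)

// statementInsight is a made-up event type standing in for whatever the
// insights subsystem records about an executed statement.
type statementInsight struct {
	fingerprint string
	latencyMS   float64
}

// ingester buffers events in a channel so that the SQL execution path
// only pays for a non-blocking send.
type ingester struct {
	events chan statementInsight
	wg     sync.WaitGroup
}

// observe is called on the hot path; it must never block, so when the
// buffer is full the event is dropped rather than slowing the query.
func (i *ingester) observe(ev statementInsight) {
	select {
	case i.events <- ev:
	default:
	}
}

// run drains the buffer on a background goroutine, where the more
// expensive analysis can happen without affecting query latency.
func (i *ingester) run() {
	i.wg.Add(1)
	go func() {
		defer i.wg.Done()
		for ev := range i.events {
			fmt.Printf("observed %s (%.1fms)\n", ev.fingerprint, ev.latencyMS)
		}
	}()
}

func main() {
	ing := &ingester{events: make(chan statementInsight, 1024)}
	ing.run()
	ing.observe(statementInsight{fingerprint: "SELECT _ FROM kv WHERE k = $1", latencyMS: 42.0})
	close(ing.events) // flush and stop the background goroutine
	ing.wg.Wait()
}
```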
    
Release note: None

85440: colmem: improve memory-limiting behavior of the accounting helpers r=yuzefovich a=yuzefovich

**colmem: introduce a helper method when no memory limit should be applied**

This commit is a purely mechanical change.

Release note: None

**colmem: move some logic of capacity-limiting into the accounting helper**

This commit moves the logic that was duplicated across each user of the
SetAccountingHelper into the helper itself. This de-duplicates some
code, and it also makes it easier to perform the refactor done in the
following commit.

Additionally, this commit makes a tiny change to make the resetting
behavior in the hash aggregator more precise.

Release note: None

**colmem: improve memory-limiting behavior of the accounting helpers**

This commit fixes an oversight in how we are allocating batches of
"dynamic" capacity. We have two related ways of reallocating batches,
and both work by growing the capacity of the batch until the memory
limit is exceeded, after which the batch is reused until the end of the
query execution. This is a reasonable heuristic under the assumption
that all tuples in the data stream are roughly equal in size, but that
might not be the case.

In particular, consider an example in which 10k small rows of 1KiB are
followed by 10k large rows of 1MiB. According to our heuristic, we
happily grow the batch to a capacity of 1024 and then never shrink it,
so once the large rows start appearing we put 1GiB worth of data into a
single batch, significantly exceeding our memory limit (usually 64MiB
with the default `workmem` setting).
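
Spelling out the arithmetic in that example (using the 64MiB default
mentioned above):

```go
package main

import "fmt"

func main() {
	const (
		capacity = 1024     // batch capacity reached while reading the small rows
		largeRow = 1 << 20  // 1MiB per large row
		workmem  = 64 << 20 // default 64MiB memory limit
	)
	footprint := capacity * largeRow
	fmt.Printf("batch footprint: %dMiB, %dx the %dMiB limit\n",
		footprint>>20, footprint/workmem, workmem>>20)
}
```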

This commit introduces a new heuristic as follows:
- the first time a batch exceeds the memory limit, its capacity is memorized,
  and from now on that capacity will determine the upper bound on the
  capacities of the batches allocated through the helper;
- if at any point in time a batch exceeds the memory limit by at least a
  factor of two, then that batch is discarded, and the capacity will never
  exceed half of the capacity of the discarded batch;
- if the memory limit is not reached, then the behavior of the dynamic growth
  of the capacity provided by `Allocator.ResetMaybeReallocate` is still
  applicable (i.e. the capacities will grow exponentially until
  coldata.BatchSize()).
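
A condensed Go sketch of the heuristic above; `batchSizer` and its
method are made-up names conveying the shape of the logic, not the
actual colmem implementation:

```go
package main

import "fmt"

const maxBatchSize = 1024 // stand-in for coldata.BatchSize()

// batchSizer is a made-up type sketching the heuristic; it is not the
// colmem.SetAccountingHelper implementation.
type batchSizer struct {
	memLimit    int64 // e.g. the operator's share of distsql_workmem
	maxCapacity int   // 0 until the memory limit is first exceeded
}

// nextCapacity returns the capacity to use for the next batch given the
// capacity and memory footprint of the batch just produced, along with
// whether that batch should be discarded rather than reused.
func (s *batchSizer) nextCapacity(prevCap int, prevFootprint int64) (capacity int, discardPrev bool) {
	switch {
	case prevFootprint >= 2*s.memLimit:
		// Far over the limit: discard the batch; future capacities never
		// exceed half of the discarded batch's capacity.
		s.maxCapacity = prevCap / 2
		return s.maxCapacity, true
	case prevFootprint >= s.memLimit:
		// Over the limit: memorize this capacity as the ceiling (keeping
		// whichever ceiling is smaller on later calls).
		if s.maxCapacity == 0 || prevCap < s.maxCapacity {
			s.maxCapacity = prevCap
		}
		return s.maxCapacity, false
	default:
		// Under the limit: usual exponential growth, bounded by the
		// ceiling (if one has been set) and by maxBatchSize.
		next := prevCap * 2
		if next > maxBatchSize {
			next = maxBatchSize
		}
		if s.maxCapacity > 0 && next > s.maxCapacity {
			next = s.maxCapacity
		}
		return next, false
	}
}

func main() {
	s := &batchSizer{memLimit: 64 << 20}     // 64MiB limit
	fmt.Println(s.nextCapacity(1024, 1<<30)) // 1GiB batch => capacity 512, discard
	fmt.Println(s.nextCapacity(512, 32<<20)) // back under the limit => stay at 512
}
```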

Note that this heuristic has no ability to grow the maximum capacity
once it has been set, although it might make sense to do so (say, if
after shrinking the capacity, the next five batches use less than half
of the memory limit). This is a conscious omission since I want this
change to be backported, and never growing seems like the safer choice.
Thus, this improvement is left as a TODO.

Also, we still might create batches that are too large in memory
footprint in those places that don't use the SetAccountingHelper (e.g.
in the columnarizer), since we perform the memory limit check at batch
granularity. However, this commit improves things there too: such a
batch is not reused, and the next iteration allocates one with half the
capacity.

Fixes: #76464.

Release note (bug fix): CockroachDB now more precisely respects the
`distsql_workmem` setting, which improves the stability of each node
and makes OOMs less likely.

**colmem: unexport Allocator.ResetMaybeReallocate**

This commit is a mechanical change to unexport
`Allocator.ResetMaybeReallocate` so that users are forced to use the
method of the same name from the helpers. This required splitting the
tests into two files.

Release note: None

85492: backupccl: remap all restored tables r=dt a=dt

This PR has a few changes, broken down into separate commits:
a) stop restoring tmp tables and remove the special-case code to synthesize their special schemas; these were previously restored only to be dropped so that restored jobs that referenced them would not be broken, but we stopped restoring jobs.
b) synthesize type-change jobs during cluster restore; this goes with not restoring jobs.
c) fix some assumptions in tests/other code about what IDs restored tables have.
d) finally, always assign new IDs to all restored objects, even during cluster restore, removing the need to carefully move conflicting tables or other things around.

Commit-by-commit review recommended.


85930: jobs: make expiration use intended txn priority r=ajwerner a=rafiss

In aed014f these operations were supposed to be changed to use
MinUserPriority. However, they weren't using the appropriate txn, so
the change didn't have the intended effect.
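
A schematic sketch of the fix's shape, with entirely hypothetical types
(this is not the jobs or kv API); the point is simply that the lowered
priority has to be applied to the transaction that actually runs the
statement:

```go
package main

import "fmt"

// priority and txn are illustrative stand-ins, not kv.Txn.
type priority int

const (
	normalPriority priority = iota
	minUserPriority
)

type txn struct{ pri priority }

func (t *txn) setPriority(p priority) { t.pri = p }

func (t *txn) exec(stmt string) {
	fmt.Printf("running %q with min priority: %v\n", stmt, t.pri == minUserPriority)
}

// expireJobs mirrors the intended shape: the priority is set on the
// same transaction that executes the expiration query, so the query
// actually runs at the lowered priority.
func expireJobs(queryTxn *txn) {
	queryTxn.setPriority(minUserPriority)
	queryTxn.exec("<job expiration query>")
}

func main() {
	expireJobs(&txn{})
}
```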

Release note: None

Co-authored-by: Jordan Lewis <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Matthew Todd <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: David Taylor <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
7 people committed Aug 11, 2022
7 parents 47dfddf + 00af7d7 + 35151c9 + 1533b8e + a4b1453 + b008287 + e04fb99 commit 8e3ee57
Showing 138 changed files with 1,797 additions and 1,340 deletions.
2 changes: 1 addition & 1 deletion docs/generated/settings/settings-for-tenants.txt
@@ -286,4 +286,4 @@ trace.jaeger.agent string the address of a Jaeger agent to receive traces using
trace.opentelemetry.collector string address of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.
trace.span_registry.enabled boolean true if set, ongoing traces can be seen at https://<ui>/#/debug/tracez
trace.zipkin.collector string the address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.
version version 22.1-44 set the active cluster version in the format '<major>.<minor>'
version version 22.1-46 set the active cluster version in the format '<major>.<minor>'
2 changes: 1 addition & 1 deletion docs/generated/settings/settings.html
@@ -217,6 +217,6 @@
<tr><td><code>trace.opentelemetry.collector</code></td><td>string</td><td><code></code></td><td>address of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.</td></tr>
<tr><td><code>trace.span_registry.enabled</code></td><td>boolean</td><td><code>true</code></td><td>if set, ongoing traces can be seen at https://<ui>/#/debug/tracez</td></tr>
<tr><td><code>trace.zipkin.collector</code></td><td>string</td><td><code></code></td><td>the address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>22.1-44</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>22.1-46</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
</tbody>
</table>
3 changes: 0 additions & 3 deletions pkg/ccl/backupccl/BUILD.bazel
@@ -75,7 +75,6 @@ go_library(
"//pkg/sql/catalog",
"//pkg/sql/catalog/catalogkeys",
"//pkg/sql/catalog/catpb",
"//pkg/sql/catalog/catprivilege",
"//pkg/sql/catalog/colinfo",
"//pkg/sql/catalog/dbdesc",
"//pkg/sql/catalog/descbuilder",
@@ -115,7 +114,6 @@ go_library(
"//pkg/sql/syntheticprivilege",
"//pkg/sql/types",
"//pkg/storage",
"//pkg/upgrade/upgrades",
"//pkg/util",
"//pkg/util/admission/admissionpb",
"//pkg/util/contextutil",
@@ -229,7 +227,6 @@ go_test(
"//pkg/sql/catalog",
"//pkg/sql/catalog/bootstrap",
"//pkg/sql/catalog/catalogkeys",
"//pkg/sql/catalog/catprivilege",
"//pkg/sql/catalog/descpb",
"//pkg/sql/catalog/descs",
"//pkg/sql/catalog/desctestutils",
137 changes: 0 additions & 137 deletions pkg/ccl/backupccl/backup_test.go
@@ -8318,143 +8318,6 @@ func flipBitInManifests(t *testing.T, rawDir string) {
}
}

func TestFullClusterTemporaryBackupAndRestore(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

skip.UnderRace(t, "times out under race cause it starts up two test servers")

numNodes := 4
// Start a new server that shares the data directory.
settings := cluster.MakeTestingClusterSettings()
sql.TempObjectWaitInterval.Override(context.Background(), &settings.SV, time.Microsecond*0)
dir, dirCleanupFn := testutils.TempDir(t)
defer dirCleanupFn()
params := base.TestClusterArgs{}
params.ServerArgs.ExternalIODir = dir
// This test fails when run within a tenant. Tracked with #76378.
params.ServerArgs.DisableDefaultTestTenant = true
params.ServerArgs.UseDatabase = "defaultdb"
params.ServerArgs.Settings = settings
knobs := base.TestingKnobs{
SQLExecutor: &sql.ExecutorTestingKnobs{
DisableTempObjectsCleanupOnSessionExit: true,
},
}
params.ServerArgs.Knobs = knobs
tc := serverutils.StartNewTestCluster(
t, numNodes, params,
)
defer tc.Stopper().Stop(context.Background())

// Start two temporary schemas and create a table in each. This table will
// have different pg_temp schemas but will be created in the same defaultdb.
comment := "never see this"
for _, connID := range []int{0, 1} {
conn := tc.ServerConn(connID)
sqlDB := sqlutils.MakeSQLRunner(conn)
sqlDB.Exec(t, `SET experimental_enable_temp_tables=true`)
sqlDB.Exec(t, `CREATE TEMP TABLE t (x INT)`)
sqlDB.Exec(t, fmt.Sprintf(`COMMENT ON TABLE t IS '%s'`, comment))
require.NoError(t, conn.Close())
}

// Create a third session where we have two temp tables which will be in the
// same pg_temp schema with the same name but in different DBs.
diffDBConn := tc.ServerConn(2)
diffDB := sqlutils.MakeSQLRunner(diffDBConn)
diffDB.Exec(t, `SET experimental_enable_temp_tables=true`)
diffDB.Exec(t, `CREATE DATABASE d1`)
diffDB.Exec(t, `USE d1`)
diffDB.Exec(t, `CREATE TEMP TABLE t (x INT)`)
diffDB.Exec(t, `CREATE DATABASE d2`)
diffDB.Exec(t, `USE d2`)
diffDB.Exec(t, `CREATE TEMP TABLE t (x INT)`)
require.NoError(t, diffDBConn.Close())

backupDBConn := tc.ServerConn(3)
backupDB := sqlutils.MakeSQLRunner(backupDBConn)
backupDB.Exec(t, `BACKUP TO 'nodelocal://0/full_cluster_backup'`)
require.NoError(t, backupDBConn.Close())

params = base.TestClusterArgs{}
ch := make(chan time.Time)
finishedCh := make(chan struct{})
knobs = base.TestingKnobs{
SQLExecutor: &sql.ExecutorTestingKnobs{
OnTempObjectsCleanupDone: func() {
finishedCh <- struct{}{}
},
TempObjectsCleanupCh: ch,
},
}
params.ServerArgs.Knobs = knobs
params.ServerArgs.Settings = settings
_, sqlDBRestore, cleanupRestore := backupRestoreTestSetupEmpty(t, singleNode, dir, InitManualReplication,
params)
defer cleanupRestore()
sqlDBRestore.Exec(t, `RESTORE FROM 'nodelocal://0/full_cluster_backup'`)

// Before the reconciliation job runs we should be able to see the following:
// - 2 synthesized pg_temp sessions in defaultdb and 1 each in db1 and db2.
// We synthesize a new temp schema for each unique backed-up schemaID
// of a temporary table descriptor.
// - All temp tables remapped to belong to the associated synthesized temp
// schema in the original db.
checkSchemasQuery := `SELECT count(*) FROM [SHOW SCHEMAS] WHERE schema_name LIKE 'pg_temp_%'`
sqlDBRestore.CheckQueryResults(t, checkSchemasQuery, [][]string{{"2"}})

checkTempTablesQuery := `SELECT table_name FROM [SHOW TABLES] ORDER BY table_name`
sqlDBRestore.CheckQueryResults(t, checkTempTablesQuery, [][]string{{"t"}, {"t"}})

// Sanity check that the databases the temporary tables originally belonged to
// are restored.
sqlDBRestore.CheckQueryResults(t,
`SELECT database_name FROM [SHOW DATABASES] ORDER BY database_name`,
[][]string{{"d1"}, {"d2"}, {"defaultdb"}, {"postgres"}, {"system"}})

// Check that we can see the comment on the temporary tables before the
// reconciliation job runs.
checkCommentQuery := fmt.Sprintf(`SELECT count(comment) FROM system.comments WHERE comment='%s'`,
comment)
var commentCount int
sqlDBRestore.QueryRow(t, checkCommentQuery).Scan(&commentCount)
require.Equal(t, commentCount, 2)

// Check that show tables in one of the restored DBs returns the temporary
// table.
sqlDBRestore.Exec(t, "USE d1")
sqlDBRestore.CheckQueryResults(t, checkTempTablesQuery, [][]string{
{"t"},
})
sqlDBRestore.CheckQueryResults(t, checkSchemasQuery, [][]string{{"1"}})

sqlDBRestore.Exec(t, "USE d2")
sqlDBRestore.CheckQueryResults(t, checkTempTablesQuery, [][]string{
{"t"},
})
sqlDBRestore.CheckQueryResults(t, checkSchemasQuery, [][]string{{"1"}})

testutils.SucceedsSoon(t, func() error {
ch <- timeutil.Now()
<-finishedCh

for _, database := range []string{"defaultdb", "d1", "d2"} {
sqlDBRestore.Exec(t, fmt.Sprintf("USE %s", database))
// Check that all the synthesized temp schemas have been wiped.
sqlDBRestore.CheckQueryResults(t, checkSchemasQuery, [][]string{{"0"}})

// Check that all the temp tables have been wiped.
sqlDBRestore.CheckQueryResults(t, checkTempTablesQuery, [][]string{})

// Check that all the temp table comments have been wiped.
sqlDBRestore.QueryRow(t, checkCommentQuery).Scan(&commentCount)
require.Equal(t, commentCount, 0)
}
return nil
})
}

func TestRestoreJobEventLogging(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.ScopeWithoutShowLogs(t).Close(t)
114 changes: 27 additions & 87 deletions pkg/ccl/backupccl/datadriven_test.go
@@ -27,7 +27,6 @@ import (
"github.com/cockroachdb/cockroach/pkg/settings/cluster"
"github.com/cockroachdb/cockroach/pkg/sql"
"github.com/cockroachdb/cockroach/pkg/sql/catalog/catalogkeys"
"github.com/cockroachdb/cockroach/pkg/sql/catalog/catprivilege"
"github.com/cockroachdb/cockroach/pkg/sql/catalog/systemschema"
"github.com/cockroachdb/cockroach/pkg/sql/catalog/tabledesc"
"github.com/cockroachdb/cockroach/pkg/testutils"
@@ -37,7 +36,6 @@ import (
"github.com/cockroachdb/cockroach/pkg/testutils/sqlutils"
"github.com/cockroachdb/cockroach/pkg/util/leaktest"
"github.com/cockroachdb/cockroach/pkg/util/log"
"github.com/cockroachdb/cockroach/pkg/util/timeutil"
"github.com/cockroachdb/datadriven"
"github.com/cockroachdb/errors"
"github.com/lib/pq"
@@ -77,28 +75,24 @@ type sqlDBKey struct {
}

type datadrivenTestState struct {
servers map[string]serverutils.TestServerInterface
// tempObjectCleanupAndWait is a mapping from server name to a method that can
// be used to nudge and wait for temporary object cleanup.
tempObjectCleanupAndWait map[string]func()
dataDirs map[string]string
sqlDBs map[sqlDBKey]*gosql.DB
jobTags map[string]jobspb.JobID
clusterTimestamps map[string]string
noticeBuffer []string
cleanupFns []func()
vars map[string]string
servers map[string]serverutils.TestServerInterface
dataDirs map[string]string
sqlDBs map[sqlDBKey]*gosql.DB
jobTags map[string]jobspb.JobID
clusterTimestamps map[string]string
noticeBuffer []string
cleanupFns []func()
vars map[string]string
}

func newDatadrivenTestState() datadrivenTestState {
return datadrivenTestState{
servers: make(map[string]serverutils.TestServerInterface),
tempObjectCleanupAndWait: make(map[string]func()),
dataDirs: make(map[string]string),
sqlDBs: make(map[sqlDBKey]*gosql.DB),
jobTags: make(map[string]jobspb.JobID),
clusterTimestamps: make(map[string]string),
vars: make(map[string]string),
servers: make(map[string]serverutils.TestServerInterface),
dataDirs: make(map[string]string),
sqlDBs: make(map[sqlDBKey]*gosql.DB),
jobTags: make(map[string]jobspb.JobID),
clusterTimestamps: make(map[string]string),
vars: make(map[string]string),
}
}

@@ -116,18 +110,12 @@ func (d *datadrivenTestState) cleanup(ctx context.Context) {
}

type serverCfg struct {
name string
iodir string
// nudgeTempObjectsCleanup is a channel used to nudge the temporary object
// reconciliation job to run.
nudgeTempObjectsCleanup chan time.Time
// tempObjectCleanupDone is the channel used by the temporary object
// reconciliation job to signal it is done cleaning up.
tempObjectCleanupDone chan struct{}
nodes int
splits int
ioConf base.ExternalIODirConfig
localities string
name string
iodir string
nodes int
splits int
ioConf base.ExternalIODirConfig
localities string
}

func (d *datadrivenTestState) addServer(t *testing.T, cfg serverCfg) error {
@@ -139,17 +127,6 @@ func (d *datadrivenTestState) addServer(t *testing.T, cfg serverCfg) error {
JobsTestingKnobs: jobs.NewTestingKnobsWithShortIntervals(),
}

// If the server needs to control temporary object cleanup, let us set that up
// now.
if cfg.nudgeTempObjectsCleanup != nil && cfg.tempObjectCleanupDone != nil {
params.ServerArgs.Knobs.SQLExecutor = &sql.ExecutorTestingKnobs{
OnTempObjectsCleanupDone: func() {
cfg.tempObjectCleanupDone <- struct{}{}
},
TempObjectsCleanupCh: cfg.nudgeTempObjectsCleanup,
}
}

settings := cluster.MakeTestingClusterSettings()
closedts.TargetDuration.Override(context.Background(), &settings.SV, 10*time.Millisecond)
closedts.SideTransportCloseInterval.Override(context.Background(), &settings.SV, 10*time.Millisecond)
@@ -177,25 +154,12 @@ func (d *datadrivenTestState) addServer(t *testing.T, cfg serverCfg) error {
InitManualReplication, params)
}
cleanupFn := func() {
if cfg.nudgeTempObjectsCleanup != nil {
close(cfg.nudgeTempObjectsCleanup)
}
if cfg.tempObjectCleanupDone != nil {
close(cfg.tempObjectCleanupDone)
}
cleanup()
}
d.servers[cfg.name] = tc.Server(0)
d.dataDirs[cfg.name] = cfg.iodir
d.cleanupFns = append(d.cleanupFns, cleanupFn)

if cfg.nudgeTempObjectsCleanup != nil && cfg.tempObjectCleanupDone != nil {
d.tempObjectCleanupAndWait[cfg.name] = func() {
cfg.nudgeTempObjectsCleanup <- timeutil.Now()
<-cfg.tempObjectCleanupDone
}
}

return nil
}

@@ -261,10 +225,6 @@ func (d *datadrivenTestState) getSQLDB(t *testing.T, server string, user string)
//
// + splits: specifies the number of ranges the bank table is split into.
//
// + control-temp-object-cleanup: sets up the server in a way that the test
// can control when to run the temporary object reconciliation loop using
// nudge-and-wait-for-temp-cleanup
//
// - "exec-sql [server=<name>] [user=<name>] [args]"
// Executes the input SQL query on the target server. By default, server is
// the last created server.
@@ -350,8 +310,6 @@ func (d *datadrivenTestState) getSQLDB(t *testing.T, server string, user string)
//
// + target: SQL target. Currently, only table names are supported.
//
// - "nudge-and-wait-for-temp-cleanup"
// Nudges the temporary object reconciliation loop to run, and waits for completion.
func TestDataDriven(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)
@@ -382,8 +340,6 @@ func TestDataDriven(t *testing.T) {
case "new-server":
var name, shareDirWith, iodir, localities string
var splits int
var nudgeTempObjectCleanup chan time.Time
var tempObjectCleanupDone chan struct{}
nodes := singleNode
var io base.ExternalIODirConfig
d.ScanArgs(t, "name", &name)
@@ -408,21 +364,15 @@ func TestDataDriven(t *testing.T) {
if d.HasArg("splits") {
d.ScanArgs(t, "splits", &splits)
}
if d.HasArg("control-temp-object-cleanup") {
nudgeTempObjectCleanup = make(chan time.Time)
tempObjectCleanupDone = make(chan struct{})
}

lastCreatedServer = name
cfg := serverCfg{
name: name,
iodir: iodir,
nudgeTempObjectsCleanup: nudgeTempObjectCleanup,
tempObjectCleanupDone: tempObjectCleanupDone,
nodes: nodes,
splits: splits,
ioConf: io,
localities: localities,
name: name,
iodir: iodir,
nodes: nodes,
splits: splits,
ioConf: io,
localities: localities,
}
err := ds.addServer(t, cfg)
if err != nil {
@@ -753,15 +703,6 @@ func TestDataDriven(t *testing.T) {
ds.clusterTimestamps[timestampTag] = ts
return ""

case "nudge-and-wait-for-temp-cleanup":
server := lastCreatedServer
if nudgeAndWait, ok := ds.tempObjectCleanupAndWait[server]; !ok {
t.Fatalf("server %s was not configured with `control-temp-object-cleanup`", server)
} else {
nudgeAndWait()
}
return ""

case "create-dummy-system-table":
db := ds.servers[lastCreatedServer].DB()
codec := ds.servers[lastCreatedServer].ExecutorConfig().(sql.ExecutorConfig).Codec
Expand All @@ -774,8 +715,7 @@ func TestDataDriven(t *testing.T) {
}
mut := dummyTable.NewBuilder().BuildCreatedMutable().(*tabledesc.Mutable)
mut.ID = id
mut.Name = fmt.Sprintf("%s_%d",
catprivilege.RestoreCopySystemTablePrefix, id)
mut.Name = fmt.Sprintf("%s_%d", "crdb_internal_copy", id)
tKey := catalogkeys.EncodeNameKey(codec, mut)
b := txn.NewBatch()
b.CPut(tKey, mut.GetID(), nil)