101260: sql: replicating JSON empty array ordering found in Postgres r=mgartner a=Shivs11

Currently, #97928 and #99275 lay out a lexicographic ordering for JSON
columns so that they can be forward indexed. This ordering is based on
the rules used by Postgres and is described in #99849.

However, Postgres currently sorts the empty JSON array before any other
JSON values. A Postgres bug report for this has been opened:
https://www.postgresql.org/message-id/17873-826fdc8bbcace4f1%40postgresql.org
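
For illustration, the resulting comparison order can be sketched in Go. This is a standalone sketch with hypothetical ranks, not CockroachDB's actual encoding code; the order mirrors the logic tests in this commit, with the empty array sorting before every other JSON value:

```go
package main

import (
	"fmt"
	"sort"
)

// rank assigns a sort rank to a decoded JSON value, mirroring the order
// exercised by the logic tests in this commit: the empty array sorts
// before every other JSON value (the Postgres quirk being replicated),
// then null, strings, numbers, false, true, non-empty arrays, objects.
// The rank values themselves are hypothetical.
func rank(v interface{}) int {
	switch t := v.(type) {
	case []interface{}:
		if len(t) == 0 {
			return 0 // empty array: the minimum JSON value
		}
		return 6
	case nil:
		return 1 // JSON null
	case string:
		return 2
	case float64:
		return 3
	case bool:
		if !t {
			return 4
		}
		return 5
	case map[string]interface{}:
		return 7
	}
	panic("unhandled JSON value")
}

func main() {
	vals := []interface{}{
		map[string]interface{}{"a": "b"}, true, []interface{}{}, "crdb", 1.0, nil,
	}
	sort.SliceStable(vals, func(i, j int) bool { return rank(vals[i]) < rank(vals[j]) })
	fmt.Println(rank(vals[0]) == 0) // the empty array sorts first: true
}
```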

This PR replicates the Postgres behavior.

Fixes #105668

Epic: CRDB-24501

Release note: None


108160: roachtest/awsdms: run once a week instead r=Jeremyyang920 a=otan

Save a bit of mad dosh by running awsdms once a week instead of daily. We don't need this tested every day.

Epic: None

Release note: None

108300: schemachanger: Unskip some backup tests r=Xiang-Gu a=Xiang-Gu

Unskip these tests, randomly skipping subtests in the BACKUP/RESTORE suites until they are parallelized.

Epic: None
Release note: None

108328: rowexec: fix TestUncertaintyErrorIsReturned under race r=yuzefovich a=yuzefovich

We just saw a case where `TestUncertaintyErrorIsReturned` failed under race because we got a different DistSQL plan. This seems plausible if the range cache population (which the test performs explicitly) isn't quick enough for some reason. This commit therefore lets the DistSQL plan match the expectation via `SucceedsSoon`: if we happen to get a bad plan, the following query execution should have the up-to-date range cache.

Fixes: #108250.

Release note: None

108341: importer: fix stale comment on mysqlStrToDatum r=mgartner,DrewKimball a=otan

Release note: None
Epic: None

From #108286 (review)

108370: go.mod: bump Pebble to fffe02a195e3 r=RahulAggarwal1016 a=RahulAggarwal1016

fffe02a1 db: simplify ScanInternal()
df7e2ae1 vfs: deflake TestDiskHealthChecking_Filesystem
ff5c929a Rate Limit Scan Statistics
af8c5f27 internal/cache: mark panic messages as redaction-safe

Epic: none
Release note: none

108379: changefeedccl: deflake TestChangefeedSchemaChangeBackfillCheckpoint r=miretskiy a=jayshrivastava

Previously, the test `TestChangefeedSchemaChangeBackfillCheckpoint` would fail because it read the table span too early. A schema change run through the declarative schema changer updates the table span to point to a new set of ranges, and the test was using the span from before the schema change, which is incorrect. This change makes it use the span from after the schema change.

I stress tested this 30k times under the new schema changer and 10k times under the legacy schema changer to ensure the test is no longer flaky.

Fixes: #108084
Release note: None
Epic: None

Co-authored-by: Shivam Saraf <[email protected]>
Co-authored-by: Oliver Tan <[email protected]>
Co-authored-by: Xiang Gu <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Rahul Aggarwal <[email protected]>
Co-authored-by: Jayant Shrivastava <[email protected]>
7 people committed Aug 8, 2023
8 parents 0d110cd + 056c300 + b5e2041 + f397c13 + daab511 + eed5695 + eda4a6a + c44ffa8 commit 69bc4c6
Showing 36 changed files with 248 additions and 122 deletions.
6 changes: 3 additions & 3 deletions DEPS.bzl
@@ -1595,10 +1595,10 @@ def go_deps():
patches = [
"@com_github_cockroachdb_cockroach//build/patches:com_github_cockroachdb_pebble.patch",
],
sha256 = "f0319e618ed024b7d708e2c1f8cf0d4b9b7e4112943288ba6036e893a7f8c151",
strip_prefix = "github.com/cockroachdb/[email protected]20230807145728-40d3f411e45b",
sha256 = "0866be1de9e4ba30d2b03d1300ca796e88260e620671ab6099850dd576e074a8",
strip_prefix = "github.com/cockroachdb/[email protected]20230808154433-fffe02a195e3",
urls = [
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230807145728-40d3f411e45b.zip",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230808154433-fffe02a195e3.zip",
],
)
go_repository(
2 changes: 1 addition & 1 deletion build/bazelutil/distdir_files.bzl
@@ -320,7 +320,7 @@ DISTDIR_FILES = {
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/go-test-teamcity/com_github_cockroachdb_go_test_teamcity-v0.0.0-20191211140407-cff980ad0a55.zip": "bac30148e525b79d004da84d16453ddd2d5cd20528e9187f1d7dac708335674b",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/gostdlib/com_github_cockroachdb_gostdlib-v1.19.0.zip": "c4d516bcfe8c07b6fc09b8a9a07a95065b36c2855627cb3514e40c98f872b69e",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/logtags/com_github_cockroachdb_logtags-v0.0.0-20230118201751-21c54148d20b.zip": "ca7776f47e5fecb4c495490a679036bfc29d95bd7625290cfdb9abb0baf97476",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230807145728-40d3f411e45b.zip": "f0319e618ed024b7d708e2c1f8cf0d4b9b7e4112943288ba6036e893a7f8c151",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230808154433-fffe02a195e3.zip": "0866be1de9e4ba30d2b03d1300ca796e88260e620671ab6099850dd576e074a8",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/redact/com_github_cockroachdb_redact-v1.1.5.zip": "11b30528eb0dafc8bc1a5ba39d81277c257cbe6946a7564402f588357c164560",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/returncheck/com_github_cockroachdb_returncheck-v0.0.0-20200612231554-92cdbca611dd.zip": "ce92ba4352deec995b1f2eecf16eba7f5d51f5aa245a1c362dfe24c83d31f82b",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/sentry-go/com_github_cockroachdb_sentry_go-v0.6.1-cockroachdb.2.zip": "fbb2207d02aecfdd411b1357efe1192dbb827959e36b7cab7491731ac55935c9",
10 changes: 9 additions & 1 deletion docs/tech-notes/jsonb_forward_indexing.md
@@ -44,10 +44,18 @@ The following rules were kept in mind while designing this form of encoding, as
5. Objects with an equal number of key value pairs are compared in the order:
`key1`, `value1`, `key2`, `value2`, ….

**NOTE:** There is one exception to these rules, which is neither documented by
Postgres, nor mentioned in the source code: empty arrays are the minimum JSON
value. As far as we can tell, this is a Postgres bug that has existed for some
time. We've decided to replicate this behavior to remain consistent with
Postgres. We've filed a [Postgres bug report](https://www.postgresql.org/message-id/17873-826fdc8bbcace4f1%40postgresql.org)
to track the issue.

In order to satisfy property 1 at all times, tags are defined in an increasing order of bytes.
The tag representing an object is a large byte value (such as 0xff), and each preceding JSON
type, in the ordering described in point 1 above, is assigned a tag one less than the one
after it. There is a special tag for empty JSON arrays in order to handle the special case of
empty arrays being ordered before all other JSON values.
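
As a sketch, the tag assignment could look like the following. The byte values here are hypothetical, chosen only to show the relative order; the real tags live in CockroachDB's `encoding` package:

```go
package main

import "fmt"

// Hypothetical tag bytes illustrating the scheme: the object tag is a
// large byte value, each earlier type in the ordering gets a tag one
// less, and a dedicated tag below all of them handles the empty-array
// special case.
const (
	tagJSONEmptyArray byte = 0xf8 // sorts before every other JSON value
	tagJSONNull       byte = 0xf9
	tagJSONString     byte = 0xfa
	tagJSONNumber     byte = 0xfb
	tagJSONFalse      byte = 0xfc
	tagJSONTrue       byte = 0xfd
	tagJSONArray      byte = 0xfe
	tagJSONObject     byte = 0xff
)

func main() {
	// Byte-wise comparison of the tags yields the desired JSON ordering.
	fmt.Println(tagJSONEmptyArray < tagJSONNull && tagJSONArray < tagJSONObject) // true
}
```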

Additionally, tags representing terminators are also defined. There are two terminators, one for the ascending designation and one for the descending designation; a terminator is required to denote the end of the key encoding of the following JSON values: objects, arrays, numbers, and strings. JSON booleans and JSON null do not require a terminator since they are not variable-length encoded, each being represented by a single tag (as explained later in this document).

2 changes: 1 addition & 1 deletion go.mod
@@ -116,7 +116,7 @@ require (
github.com/cockroachdb/go-test-teamcity v0.0.0-20191211140407-cff980ad0a55
github.com/cockroachdb/gostdlib v1.19.0
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b
github.com/cockroachdb/pebble v0.0.0-20230807145728-40d3f411e45b
github.com/cockroachdb/pebble v0.0.0-20230808154433-fffe02a195e3
github.com/cockroachdb/redact v1.1.5
github.com/cockroachdb/returncheck v0.0.0-20200612231554-92cdbca611dd
github.com/cockroachdb/stress v0.0.0-20220803192808-1806698b1b7b
4 changes: 2 additions & 2 deletions go.sum
@@ -493,8 +493,8 @@ github.com/cockroachdb/gostdlib v1.19.0/go.mod h1:+dqqpARXbE/gRDEhCak6dm0l14AaTy
github.com/cockroachdb/logtags v0.0.0-20211118104740-dabe8e521a4f/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b h1:r6VH0faHjZeQy818SGhaone5OnYfxFR/+AzdY3sf5aE=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs=
github.com/cockroachdb/pebble v0.0.0-20230807145728-40d3f411e45b h1:ymYyDZy5WYRTBqPVYgy0XUW+gHx2HeRkyt1FJmNJVOo=
github.com/cockroachdb/pebble v0.0.0-20230807145728-40d3f411e45b/go.mod h1:FN5O47SBEz5+kO9fG8UTR64g2WS1u5ZFCgTvxGjoSks=
github.com/cockroachdb/pebble v0.0.0-20230808154433-fffe02a195e3 h1:LUfRb+Ibf/OrSFHSyjls7neeWBAIsK4d/SWkv7z1nLw=
github.com/cockroachdb/pebble v0.0.0-20230808154433-fffe02a195e3/go.mod h1:FN5O47SBEz5+kO9fG8UTR64g2WS1u5ZFCgTvxGjoSks=
github.com/cockroachdb/redact v1.1.3/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/redact v1.1.5 h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwPJ30=
github.com/cockroachdb/redact v1.1.5/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
53 changes: 34 additions & 19 deletions pkg/ccl/changefeedccl/changefeed_test.go
@@ -1949,15 +1949,17 @@ func TestChangefeedSchemaChangeBackfillCheckpoint(t *testing.T) {
changefeedbase.FrontierCheckpointMaxBytes.Override(
context.Background(), &s.Server.ClusterSettings().SV, maxCheckpointSize)

// Note the tableSpan to avoid resolved events that leave no gaps
fooDesc := desctestutils.TestingGetPublicTableDescriptor(
s.SystemServer.DB(), s.Codec, "d", "foo")
tableSpan := fooDesc.PrimaryIndexSpan(s.Codec)
var tableSpan roachpb.Span
refreshTableSpan := func() {
fooDesc := desctestutils.TestingGetPublicTableDescriptor(
s.SystemServer.DB(), s.Codec, "d", "foo")
tableSpan = fooDesc.PrimaryIndexSpan(s.Codec)
}

// FilterSpanWithMutation should ensure that once the backfill begins, the following resolved events
// that are for that backfill (are of the timestamp right after the backfill timestamp) resolve some
// but not all of the time, which results in a checkpoint eventually being created
haveGaps := false
numGaps := 0
var backfillTimestamp hlc.Timestamp
var initialCheckpoint roachpb.SpanGroup
var foundCheckpoint int32
@@ -1971,6 +1973,11 @@ func TestChangefeedSchemaChangeBackfillCheckpoint(t *testing.T) {
// timestamp such that all backfill spans have a timestamp of
// timestamp.Next().
if r.BoundaryType == expectedBoundaryType {
// NB: We wait until the schema change is public before looking
// up the table span. When using the declarative schema changer,
// the table span will be different before and after the schema
// change due to a primary index swap.
refreshTableSpan()
backfillTimestamp = r.Timestamp
return false, nil
}
@@ -1993,11 +2000,18 @@ func TestChangefeedSchemaChangeBackfillCheckpoint(t *testing.T) {
return !(backfillTimestamp.IsEmpty() || r.Timestamp.LessEq(backfillTimestamp.Next())), nil
}

// Only allow resolving if we definitely won't have a completely resolved table
if !r.Span.Equal(tableSpan) && haveGaps {
// At the end of a backfill, kv feed will emit a resolved span for the whole table.
// Filter this out because we would like to leave gaps.
if r.Span.Equal(tableSpan) {
return true, nil
}

// Ensure that we have at least 2 gaps, so when a second checkpoint happens later in this test,
// the second checkpoint can still leave at least one gap.
if numGaps >= 2 {
return rnd.Intn(10) > 7, nil
}
haveGaps = true
numGaps += 1
return true, nil
}

@@ -2026,7 +2040,7 @@ func TestChangefeedSchemaChangeBackfillCheckpoint(t *testing.T) {
// as well as the newly resolved ones
var secondCheckpoint roachpb.SpanGroup
foundCheckpoint = 0
haveGaps = false
numGaps = 0
knobs.FilterSpanWithMutation = func(r *jobspb.ResolvedSpan) (bool, error) {
// Stop resolving anything after second checkpoint set to avoid backfill completion
if secondCheckpoint.Len() > 0 {
@@ -2054,11 +2068,17 @@ func TestChangefeedSchemaChangeBackfillCheckpoint(t *testing.T) {

require.Falsef(t, initialCheckpoint.Encloses(r.Span), "second backfill should not resolve checkpointed span")

// Only allow resolving if we definitely won't have a completely resolved table
if !r.Span.Equal(tableSpan) && haveGaps {
// At the end of a backfill, kv feed will emit a resolved span for the whole table.
// Filter this out because we would like to leave at least one gap.
if r.Span.Equal(tableSpan) {
return true, nil
}

// Ensure there is at least one gap so that we can receive resolved spans later.
if numGaps >= 1 {
return rnd.Intn(10) > 7, nil
}
haveGaps = true
numGaps += 1
return true, nil
}

@@ -2097,15 +2117,10 @@ func TestChangefeedSchemaChangeBackfillCheckpoint(t *testing.T) {
// Pause job to avoid race on the resolved array
require.NoError(t, jobFeed.Pause())

// NB: With the declarative schema changer, there is a primary index swap,
// so the primary index span will change.
freshFooDesc := desctestutils.TestingGetPublicTableDescriptor(
s.SystemServer.DB(), s.Codec, "d", "foo")
tableSpanAfter := freshFooDesc.PrimaryIndexSpan(s.Codec)

// Verify that none of the resolved spans after resume were checkpointed.
t.Logf("Table Span: %s, Second Checkpoint: %v, Resolved Spans: %v", tableSpan, secondCheckpoint, resolved)
for _, sp := range resolved {
require.Falsef(t, !sp.Equal(tableSpanAfter) && secondCheckpoint.Contains(sp.Key), "span should not have been resolved: %s", sp)
require.Falsef(t, !sp.Equal(tableSpan) && secondCheckpoint.Contains(sp.Key), "span should not have been resolved: %s", sp)
}
}

2 changes: 1 addition & 1 deletion pkg/cmd/roachtest/tests/awsdms.go
@@ -192,7 +192,7 @@ func registerAWSDMS(r registry.Registry) {
Owner: registry.OwnerMigrations,
Cluster: r.MakeClusterSpec(1),
Leases: registry.MetamorphicLeases,
Tags: registry.Tags(`default`, `awsdms`, `aws`),
Tags: registry.Tags(`weekly`, `aws-weekly`),
Run: runAWSDMS,
})
}
7 changes: 4 additions & 3 deletions pkg/sql/importer/read_import_mysql.go
@@ -209,9 +209,10 @@ const (
func mysqlStrToDatum(evalCtx *eval.Context, s string, desired *types.T) (tree.Datum, error) {
switch desired.Family() {
case types.BytesFamily:
// mysql emits raw byte strings that do not use the same escaping as our ParseBytes
// function expects, and the difference between ParseStringAs and
// ParseDatumStringAs is whether or not it attempts to parse bytes.
// mysql emits raw byte strings that do not use the same escaping as our
// tree.ParseDBytes function expects, and the difference between
// tree.ParseAndRequireString and mysqlStrToDatum is whether or not it
// attempts to parse bytes.
return tree.NewDBytes(tree.DBytes(s)), nil
default:
res, _, err := tree.ParseAndRequireString(desired, s, evalCtx)
22 changes: 11 additions & 11 deletions pkg/sql/logictest/testdata/logic_test/json_index
@@ -20,13 +20,13 @@ INSERT INTO t VALUES
query T
SELECT x FROM t ORDER BY x
----
[]
"a"
"aa"
"abcdefghi"
"b"
1
100
[]
{"a": "b"}


@@ -38,13 +38,13 @@
query T
SELECT x FROM t@t_pkey ORDER BY x
----
[]
"a"
"aa"
"abcdefghi"
"b"
1
100
[]
{"a": "b"}

# Use the index for point lookups.
@@ -77,12 +77,12 @@ query T
SELECT x FROM t@t_pkey WHERE x > '1' ORDER BY x
----
100
[]
{"a": "b"}

query T
SELECT x FROM t@t_pkey WHERE x < '1' ORDER BY x
----
[]
"a"
"aa"
"abcdefghi"
@@ -92,12 +92,12 @@
query T
SELECT x FROM t@t_pkey WHERE x > '1' OR x < '1' ORDER BY x
----
[]
"a"
"aa"
"abcdefghi"
"b"
100
[]
{"a": "b"}

query T
@@ -109,12 +109,12 @@
SELECT x FROM t@t_pkey WHERE x > '1' OR x < '1' ORDER BY x DESC
----
{"a": "b"}
[]
100
"b"
"abcdefghi"
"aa"
"a"
[]

# Adding more primitive JSON values.
statement ok
@@ -129,6 +129,7 @@ INSERT INTO t VALUES
query T
SELECT x FROM t@t_pkey ORDER BY x
----
[]
null
"Testing Punctuation?!."
"a"
@@ -141,18 +142,17 @@ null
100
false
true
[]
{"a": "b"}

query T
SELECT x FROM t@t_pkey WHERE x > 'true' ORDER BY x
----
[]
{"a": "b"}

query T
SELECT x FROM t@t_pkey WHERE x < 'false' ORDER BY x
----
[]
null
"Testing Punctuation?!."
"a"
@@ -330,12 +330,12 @@ query T
SELECT x FROM t@t_pkey ORDER BY x
----
NULL
[]
null
"crdb"
1
false
true
[]
[1, 2, 3]
{}
{"a": "b", "c": "d"}
@@ -346,24 +346,24 @@ SELECT x FROM t@t_pkey ORDER BY x DESC
{"a": "b", "c": "d"}
{}
[1, 2, 3]
[]
true
false
1
"crdb"
null
[]
NULL

# Test to show JSON Null is different from NULL.
query T
SELECT x FROM t@t_pkey WHERE x IS NOT NULL ORDER BY x
----
[]
null
"crdb"
1
false
true
[]
[1, 2, 3]
{}
{"a": "b", "c": "d"}
@@ -446,12 +446,12 @@ INSERT INTO t VALUES
query T
SELECT x FROM t@i ORDER BY x;
----
[]
null
"crdb"
1
false
true
[]
[null]
[1]
[{"a": "b"}]
14 changes: 7 additions & 7 deletions pkg/sql/opt/exec/execbuilder/testdata/json
@@ -205,7 +205,7 @@ vectorized: true
• scan
missing stats
table: t@t_pkey
spans: [/'null' - /'null'] [/'""' - /'""'] [/'[]' - /'[]'] [/'{}' - /'{}']
spans: [/'[]' - /'[]'] [/'null' - /'null'] [/'""' - /'""'] [/'{}' - /'{}']

# Multicolumn index, including JSONB

@@ -252,20 +252,20 @@ INSERT INTO composite VALUES (1, '1.00'::JSONB), (2, '1'::JSONB), (3, '2'::JSONB),
(4, '3.0'::JSONB), (5, '"a"'::JSONB)
----
CPut /Table/108/1/1/0 -> /TUPLE/
InitPut /Table/108/2/"G*\x02\x00\x00\x89\x88" -> /BYTES/0x2f0f0c200000002000000403348964
InitPut /Table/108/2/"H*\x02\x00\x00\x89\x88" -> /BYTES/0x2f0f0c200000002000000403348964
CPut /Table/108/1/2/0 -> /TUPLE/
InitPut /Table/108/2/"G*\x02\x00\x00\x8a\x88" -> /BYTES/
InitPut /Table/108/2/"H*\x02\x00\x00\x8a\x88" -> /BYTES/
CPut /Table/108/1/3/0 -> /TUPLE/
InitPut /Table/108/2/"G*\x04\x00\x00\x8b\x88" -> /BYTES/
InitPut /Table/108/2/"H*\x04\x00\x00\x8b\x88" -> /BYTES/
CPut /Table/108/1/4/0 -> /TUPLE/
InitPut /Table/108/2/"G*\x06\x00\x00\x8c\x88" -> /BYTES/0x2f0f0c20000000200000040334891e
InitPut /Table/108/2/"H*\x06\x00\x00\x8c\x88" -> /BYTES/0x2f0f0c20000000200000040334891e
CPut /Table/108/1/5/0 -> /TUPLE/
InitPut /Table/108/2/"F\x12a\x00\x01\x00\x8d\x88" -> /BYTES/
InitPut /Table/108/2/"G\x12a\x00\x01\x00\x8d\x88" -> /BYTES/

query T kvtrace
SELECT j FROM composite where j = '1.00'::JSONB
----
Scan /Table/108/2/"G*\x02\x00\x0{0"-1"}
Scan /Table/108/2/"H*\x02\x00\x0{0"-1"}

query T
SELECT j FROM composite ORDER BY j;
2 changes: 1 addition & 1 deletion pkg/sql/rowenc/keyside/json.go
@@ -79,7 +79,7 @@ func decodeJSONKey(buf []byte, dir encoding.Direction) (json.JSON, []byte, error
}
buf = buf[1:] // removing the terminator
jsonVal = json.FromDecimal(dec)
case encoding.JSONArray, encoding.JSONArrayDesc:
case encoding.JSONArray, encoding.JSONArrayDesc, encoding.JsonEmptyArray, encoding.JsonEmptyArrayDesc:
jsonVal, buf, err = decodeJSONArray(buf, dir)
if err != nil {
return nil, nil, errors.NewAssertionErrorWithWrappedErrf(err, "could not decode JSON Array")
1 change: 1 addition & 0 deletions pkg/sql/rowexec/aggregator_test.go
@@ -60,6 +60,7 @@ func aggregations(aggTestSpecs []aggTestSpec) []execinfrapb.AggregatorSpec_Aggre
// VARIANCE
func TestAggregator(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

var (
col0 = []uint32{0}
Expand Down