99858: screl: Add IndexID as an attr of the UniqueWithoutIndex element r=Xiang-Gu a=Xiang-Gu

Previously, an ALTER TABLE statement that added/dropped a column or altered the primary key while also adding a UNIQUE WITHOUT INDEX constraint was problematic: it could succeed even when the table contained duplicate values. We already had a dep rule that enforces the new primary index to be backfilled before we validate the constraint against it. Unfortunately, that rule was not enforced for UNIQUE WITHOUT INDEX constraints because IndexID was not an attr of that element. This commit fixes this.

Fixes cockroachdb#99281
Epic: None
Release note (bug fix): Fixed a bug in v23.1 in the declarative schema changer where a UNIQUE WITHOUT INDEX constraint could be incorrectly added to a table containing duplicate values when combined with an ADD/DROP COLUMN in the same ALTER TABLE statement.
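The ordering problem can be pictured with a toy model of attribute-matched dep rules (all names below are illustrative, not the actual screl API): a dep rule joins two elements on a shared attribute, so if one side never exposes IndexID, the rule silently never applies and validation can run before the backfill.

```go
package main

import "fmt"

// element is a toy stand-in for a schema-changer element: a kind plus a
// bag of attributes (the real system uses screl attributes, not a map).
type element struct {
	kind  string
	attrs map[string]int
}

// depRuleMatches models the "validate only after backfill" dep rule: it can
// join a constraint element to an index element only when both expose an
// IndexID attribute and the values agree.
func depRuleMatches(constraint, index element) bool {
	cID, okC := constraint.attrs["IndexID"]
	iID, okI := index.attrs["IndexID"]
	return okC && okI && cID == iID
}

func main() {
	newPrimaryIndex := element{kind: "PrimaryIndex", attrs: map[string]int{"IndexID": 2}}

	// Before the fix: UniqueWithoutIndex exposed no IndexID attribute, so the
	// ordering rule silently never fired and validation could run too early.
	before := element{kind: "UniqueWithoutIndex", attrs: map[string]int{}}
	fmt.Println(depRuleMatches(before, newPrimaryIndex)) // false

	// After the fix: the element names the index it is validated against.
	after := element{kind: "UniqueWithoutIndex", attrs: map[string]int{"IndexID": 2}}
	fmt.Println(depRuleMatches(after, newPrimaryIndex)) // true
}
```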


100357: sql: allow changing the number of histogram samples and buckets r=rytaft a=rytaft

Informs cockroachdb#72418
Informs cockroachdb#97701

Release note (sql change): Added two new cluster settings that enable users to change the number of histogram samples and buckets collected when building histograms as part of table statistics collection: `sql.stats.histogram_samples.count` and `sql.stats.histogram_buckets.count`. While the default values should work for most cases, it may be beneficial to increase the number of samples and buckets for very large tables to avoid creating a histogram that misses important values.
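For intuition on why the two knobs interact, here is a minimal, self-contained sketch of sample-based, equi-depth histogram construction — not CockroachDB's actual implementation; `reservoirSample` and `equiDepthBuckets` are hypothetical helpers. The sample count caps how much of the table is observed, and the bucket count caps how finely the sampled distribution is summarized, so very large tables may need both raised.

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// reservoirSample keeps a uniform random sample of at most k values from the
// stream, mirroring how stats collection bounds the rows it retains.
func reservoirSample(stream []int, k int, rng *rand.Rand) []int {
	sample := make([]int, 0, k)
	for i, v := range stream {
		if i < k {
			sample = append(sample, v)
		} else if j := rng.Intn(i + 1); j < k {
			sample[j] = v
		}
	}
	return sample
}

// equiDepthBuckets summarizes a sample with at most maxBuckets buckets whose
// upper bounds split the sorted sample into equal-count runs.
func equiDepthBuckets(sample []int, maxBuckets int) []int {
	sort.Ints(sample)
	if len(sample) == 0 {
		return nil
	}
	if maxBuckets > len(sample) {
		maxBuckets = len(sample)
	}
	bounds := make([]int, 0, maxBuckets)
	for b := 1; b <= maxBuckets; b++ {
		idx := b*len(sample)/maxBuckets - 1
		bounds = append(bounds, sample[idx])
	}
	return bounds
}

func main() {
	rng := rand.New(rand.NewSource(1))
	table := make([]int, 20000)
	for i := range table {
		table[i] = i + 1
	}
	sample := reservoirSample(table, 500, rng)
	fmt.Println(len(sample))                        // 500
	fmt.Println(len(equiDepthBuckets(sample, 200))) // 200
}
```

With only 500 samples of a 20000-row table, raising the bucket limit past 500 cannot add detail — which is why the two settings are tuned together.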

100489: go.mod: bump Pebble to b84a7ec7d8dc r=RaduBerinde a=jbowens

```
b84a7ec7 db: populate return statistics for flushable ingests
5fd58365 objstorage: implement tracing
7f7451f2 db,record: add BatchCommitStats to measure total and component durations for commit
295aaab0 objstorage: implement basic refcounting
```

Epic: None
Release note: None

100516: multiregionccl: reenable TestMrSystemDatabase r=ajwerner a=ajwerner

I stressed this for a long time on many cores and it did not fail.

Epic: none

Fixes: cockroachdb#98039

Release note: None

100527: roachtest: skip multitenant/distsql for now r=yuzefovich a=yuzefovich

Informs: cockroachdb#100260.

Epic: None

Release note: None

Co-authored-by: Xiang Gu <[email protected]>
Co-authored-by: Rebecca Taft <[email protected]>
Co-authored-by: Jackson Owens <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
6 people committed Apr 3, 2023
6 parents cd933d3 + f2c6653 + 57abe80 + 83da4a0 + 1740018 + 8abbf27 commit cc9e0c6
Showing 27 changed files with 220 additions and 82 deletions.
6 changes: 3 additions & 3 deletions DEPS.bzl
@@ -1555,10 +1555,10 @@ def go_deps():
patches = [
"@com_github_cockroachdb_cockroach//build/patches:com_github_cockroachdb_pebble.patch",
],
-sha256 = "b464f99c3bf962d808dd22ad5022d029d8f01a19deb7b932f3fcdd08a7e32e3f",
-strip_prefix = "github.com/cockroachdb/[email protected]20230330185756-53a50a04c2ef",
+sha256 = "f282ddeea7d1c18f2acc37c252e6673b6d11c046a1c78bb28106bd7b8feea319",
+strip_prefix = "github.com/cockroachdb/[email protected]20230403163348-b84a7ec7d8dc",
urls = [
-"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230330185756-53a50a04c2ef.zip",
+"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230403163348-b84a7ec7d8dc.zip",
],
)
go_repository(
2 changes: 1 addition & 1 deletion build/bazelutil/distdir_files.bzl
@@ -311,7 +311,7 @@ DISTDIR_FILES = {
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/go-test-teamcity/com_github_cockroachdb_go_test_teamcity-v0.0.0-20191211140407-cff980ad0a55.zip": "bac30148e525b79d004da84d16453ddd2d5cd20528e9187f1d7dac708335674b",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/gostdlib/com_github_cockroachdb_gostdlib-v1.19.0.zip": "c4d516bcfe8c07b6fc09b8a9a07a95065b36c2855627cb3514e40c98f872b69e",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/logtags/com_github_cockroachdb_logtags-v0.0.0-20230118201751-21c54148d20b.zip": "ca7776f47e5fecb4c495490a679036bfc29d95bd7625290cfdb9abb0baf97476",
-"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230330185756-53a50a04c2ef.zip": "b464f99c3bf962d808dd22ad5022d029d8f01a19deb7b932f3fcdd08a7e32e3f",
+"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/pebble/com_github_cockroachdb_pebble-v0.0.0-20230403163348-b84a7ec7d8dc.zip": "f282ddeea7d1c18f2acc37c252e6673b6d11c046a1c78bb28106bd7b8feea319",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/redact/com_github_cockroachdb_redact-v1.1.3.zip": "7778b1e4485e4f17f35e5e592d87eb99c29e173ac9507801d000ad76dd0c261e",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/returncheck/com_github_cockroachdb_returncheck-v0.0.0-20200612231554-92cdbca611dd.zip": "ce92ba4352deec995b1f2eecf16eba7f5d51f5aa245a1c362dfe24c83d31f82b",
"https://storage.googleapis.com/cockroach-godeps/gomod/github.com/cockroachdb/sentry-go/com_github_cockroachdb_sentry_go-v0.6.1-cockroachdb.2.zip": "fbb2207d02aecfdd411b1357efe1192dbb827959e36b7cab7491731ac55935c9",
2 changes: 2 additions & 0 deletions docs/generated/settings/settings-for-tenants.txt
@@ -261,7 +261,9 @@ sql.stats.cleanup.recurrence string @hourly cron-tab recurrence for SQL Stats cl
sql.stats.flush.enabled boolean true if set, SQL execution statistics are periodically flushed to disk tenant-rw
sql.stats.flush.interval duration 10m0s the interval at which SQL execution statistics are flushed to disk, this value must be less than or equal to 1 hour tenant-rw
sql.stats.forecasts.enabled boolean true when true, enables generation of statistics forecasts by default for all tables tenant-rw
+sql.stats.histogram_buckets.count integer 200 maximum number of histogram buckets to build during table statistics collection tenant-rw
sql.stats.histogram_collection.enabled boolean true histogram collection mode tenant-rw
+sql.stats.histogram_samples.count integer 10000 number of rows sampled for histogram construction during table statistics collection tenant-rw
sql.stats.multi_column_collection.enabled boolean true multi-column statistics collection mode tenant-rw
sql.stats.non_default_columns.min_retention_period duration 24h0m0s minimum retention period for table statistics collected on non-default columns tenant-rw
sql.stats.persisted_rows.max integer 1000000 maximum number of rows of statement and transaction statistics that will be persisted in the system tables tenant-rw
2 changes: 2 additions & 0 deletions docs/generated/settings/settings.html
@@ -213,7 +213,9 @@
<tr><td><div id="setting-sql-stats-flush-enabled" class="anchored"><code>sql.stats.flush.enabled</code></div></td><td>boolean</td><td><code>true</code></td><td>if set, SQL execution statistics are periodically flushed to disk</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
<tr><td><div id="setting-sql-stats-flush-interval" class="anchored"><code>sql.stats.flush.interval</code></div></td><td>duration</td><td><code>10m0s</code></td><td>the interval at which SQL execution statistics are flushed to disk, this value must be less than or equal to 1 hour</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
<tr><td><div id="setting-sql-stats-forecasts-enabled" class="anchored"><code>sql.stats.forecasts.enabled</code></div></td><td>boolean</td><td><code>true</code></td><td>when true, enables generation of statistics forecasts by default for all tables</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
+<tr><td><div id="setting-sql-stats-histogram-buckets-count" class="anchored"><code>sql.stats.histogram_buckets.count</code></div></td><td>integer</td><td><code>200</code></td><td>maximum number of histogram buckets to build during table statistics collection</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
<tr><td><div id="setting-sql-stats-histogram-collection-enabled" class="anchored"><code>sql.stats.histogram_collection.enabled</code></div></td><td>boolean</td><td><code>true</code></td><td>histogram collection mode</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
+<tr><td><div id="setting-sql-stats-histogram-samples-count" class="anchored"><code>sql.stats.histogram_samples.count</code></div></td><td>integer</td><td><code>10000</code></td><td>number of rows sampled for histogram construction during table statistics collection</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
<tr><td><div id="setting-sql-stats-multi-column-collection-enabled" class="anchored"><code>sql.stats.multi_column_collection.enabled</code></div></td><td>boolean</td><td><code>true</code></td><td>multi-column statistics collection mode</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
<tr><td><div id="setting-sql-stats-non-default-columns-min-retention-period" class="anchored"><code>sql.stats.non_default_columns.min_retention_period</code></div></td><td>duration</td><td><code>24h0m0s</code></td><td>minimum retention period for table statistics collected on non-default columns</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
<tr><td><div id="setting-sql-stats-persisted-rows-max" class="anchored"><code>sql.stats.persisted_rows.max</code></div></td><td>integer</td><td><code>1000000</code></td><td>maximum number of rows of statement and transaction statistics that will be persisted in the system tables</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
2 changes: 1 addition & 1 deletion go.mod
@@ -115,7 +115,7 @@ require (
github.com/cockroachdb/go-test-teamcity v0.0.0-20191211140407-cff980ad0a55
github.com/cockroachdb/gostdlib v1.19.0
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b
-github.com/cockroachdb/pebble v0.0.0-20230330185756-53a50a04c2ef
+github.com/cockroachdb/pebble v0.0.0-20230403163348-b84a7ec7d8dc
github.com/cockroachdb/redact v1.1.3
github.com/cockroachdb/returncheck v0.0.0-20200612231554-92cdbca611dd
github.com/cockroachdb/stress v0.0.0-20220803192808-1806698b1b7b
4 changes: 2 additions & 2 deletions go.sum
@@ -479,8 +479,8 @@ github.com/cockroachdb/gostdlib v1.19.0/go.mod h1:+dqqpARXbE/gRDEhCak6dm0l14AaTy
github.com/cockroachdb/logtags v0.0.0-20211118104740-dabe8e521a4f/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b h1:r6VH0faHjZeQy818SGhaone5OnYfxFR/+AzdY3sf5aE=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs=
-github.com/cockroachdb/pebble v0.0.0-20230330185756-53a50a04c2ef h1:tUK4xPngXR/IA6Qwyp3WUPsC0jxlE0FO2rysjgYiZA0=
-github.com/cockroachdb/pebble v0.0.0-20230330185756-53a50a04c2ef/go.mod h1:9lRMC4XN3/BLPtIp6kAKwIaHu369NOf2rMucPzipz50=
+github.com/cockroachdb/pebble v0.0.0-20230403163348-b84a7ec7d8dc h1:JvaHl6Zd/1rLIJ/sJAkEGCsyFRRp5Lh5nMAZzUnftZc=
+github.com/cockroachdb/pebble v0.0.0-20230403163348-b84a7ec7d8dc/go.mod h1:9lRMC4XN3/BLPtIp6kAKwIaHu369NOf2rMucPzipz50=
github.com/cockroachdb/redact v1.1.3 h1:AKZds10rFSIj7qADf0g46UixK8NNLwWTNdCIGS5wfSQ=
github.com/cockroachdb/redact v1.1.3/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/returncheck v0.0.0-20200612231554-92cdbca611dd h1:KFOt5I9nEKZgCnOSmy8r4Oykh8BYQO8bFOTgHDS8YZA=
3 changes: 0 additions & 3 deletions pkg/ccl/multiregionccl/multiregion_system_table_test.go
@@ -26,7 +26,6 @@ import (
"github.com/cockroachdb/cockroach/pkg/sql/sqlliveness/slstorage"
"github.com/cockroachdb/cockroach/pkg/testutils"
"github.com/cockroachdb/cockroach/pkg/testutils/serverutils"
-"github.com/cockroachdb/cockroach/pkg/testutils/skip"
"github.com/cockroachdb/cockroach/pkg/testutils/sqlutils"
"github.com/cockroachdb/cockroach/pkg/util/leaktest"
"github.com/cockroachdb/cockroach/pkg/util/log"
@@ -39,8 +38,6 @@ func TestMrSystemDatabase(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

-skip.WithIssue(t, 98039, "flaky test")
-
ctx := context.Background()

// Enable settings required for configuring a tenant's system database as multi-region.
1 change: 1 addition & 0 deletions pkg/cmd/roachtest/tests/multitenant_distsql.go
@@ -37,6 +37,7 @@ func registerMultiTenantDistSQL(r registry.Registry) {
b := bundle
to := timeout
r.Add(registry.TestSpec{
+Skip: "the test is skipped until #100260 is resolved",
Name: fmt.Sprintf("multitenant/distsql/instances=%d/bundle=%s/timeout=%d", numInstances, b, to),
Owner: registry.OwnerSQLQueries,
Cluster: r.MakeClusterSpec(4),
20 changes: 12 additions & 8 deletions pkg/sql/create_stats.go
@@ -66,9 +66,9 @@ const nonIndexColHistogramBuckets = 2
// StubTableStats generates "stub" statistics for a table which are missing
// histograms and have 0 for all values.
func StubTableStats(
-desc catalog.TableDescriptor, name string, multiColEnabled bool,
+desc catalog.TableDescriptor, name string, multiColEnabled bool, defaultHistogramBuckets uint32,
) ([]*stats.TableStatisticProto, error) {
-colStats, err := createStatsDefaultColumns(desc, multiColEnabled)
+colStats, err := createStatsDefaultColumns(desc, multiColEnabled, defaultHistogramBuckets)
if err != nil {
return nil, err
}
@@ -272,7 +272,10 @@ func (n *createStatsNode) makeJobRecord(ctx context.Context) (*jobs.Record, erro
multiColEnabled = stats.MultiColumnStatisticsClusterMode.Get(&n.p.ExecCfg().Settings.SV)
deleteOtherStats = true
}
-if colStats, err = createStatsDefaultColumns(tableDesc, multiColEnabled); err != nil {
+defaultHistogramBuckets := uint32(stats.DefaultHistogramBuckets.Get(n.p.ExecCfg().SV()))
+if colStats, err = createStatsDefaultColumns(
+	tableDesc, multiColEnabled, defaultHistogramBuckets,
+); err != nil {
return nil, err
}
} else {
@@ -300,20 +303,21 @@ func (n *createStatsNode) makeJobRecord(ctx context.Context) (*jobs.Record, erro
// STATISTICS or other SQL on table_statistics.
_ = stats.MakeSortedColStatKey(columnIDs)
isInvIndex := colinfo.ColumnTypeIsOnlyInvertedIndexable(col.GetType())
+defaultHistogramBuckets := uint32(stats.DefaultHistogramBuckets.Get(n.p.ExecCfg().SV()))
colStats = []jobspb.CreateStatsDetails_ColStat{{
ColumnIDs: columnIDs,
// By default, create histograms on all explicitly requested column stats
// with a single column that doesn't use an inverted index.
HasHistogram: len(columnIDs) == 1 && !isInvIndex,
-HistogramMaxBuckets: stats.DefaultHistogramBuckets,
+HistogramMaxBuckets: defaultHistogramBuckets,
}}
// Make histograms for inverted index column types.
if len(columnIDs) == 1 && isInvIndex {
colStats = append(colStats, jobspb.CreateStatsDetails_ColStat{
ColumnIDs: columnIDs,
HasHistogram: true,
Inverted: true,
-HistogramMaxBuckets: stats.DefaultHistogramBuckets,
+HistogramMaxBuckets: defaultHistogramBuckets,
})
}
}
@@ -382,7 +386,7 @@ const maxNonIndexCols = 100
// other columns from the table. We only collect histograms for index columns,
// plus any other boolean or enum columns (where the "histogram" is tiny).
func createStatsDefaultColumns(
-desc catalog.TableDescriptor, multiColEnabled bool,
+desc catalog.TableDescriptor, multiColEnabled bool, defaultHistogramBuckets uint32,
) ([]jobspb.CreateStatsDetails_ColStat, error) {
colStats := make([]jobspb.CreateStatsDetails_ColStat, 0, len(desc.ActiveIndexes()))

@@ -428,7 +432,7 @@ func createStatsDefaultColumns(
colStat := jobspb.CreateStatsDetails_ColStat{
ColumnIDs: colIDs,
HasHistogram: !isInverted,
-HistogramMaxBuckets: stats.DefaultHistogramBuckets,
+HistogramMaxBuckets: defaultHistogramBuckets,
}
colStats = append(colStats, colStat)

@@ -570,7 +574,7 @@ func createStatsDefaultColumns(
// for those types, up to DefaultHistogramBuckets.
maxHistBuckets := uint32(nonIndexColHistogramBuckets)
if col.GetType().Family() == types.BoolFamily || col.GetType().Family() == types.EnumFamily {
-maxHistBuckets = stats.DefaultHistogramBuckets
+maxHistBuckets = defaultHistogramBuckets
}
colStats = append(colStats, jobspb.CreateStatsDetails_ColStat{
ColumnIDs: colIDs,
17 changes: 14 additions & 3 deletions pkg/sql/distsql_plan_stats.go
@@ -12,6 +12,7 @@ package sql

import (
"context"
+"math"
"time"

"github.com/cockroachdb/cockroach/pkg/jobs"
@@ -43,7 +44,16 @@ type requestedStat struct {
inverted bool
}

-const histogramSamples = 10000
+// histogramSamples is the number of sample rows to be collected for histogram
+// construction. For larger tables, it may be beneficial to increase this number
+// to get a more accurate distribution.
+var histogramSamples = settings.RegisterIntSetting(
+	settings.TenantWritable,
+	"sql.stats.histogram_samples.count",
+	"number of rows sampled for histogram construction during table statistics collection",
+	10000,
+	settings.NonNegativeIntWithMaximum(math.MaxUint32),
+).WithPublic()

// maxTimestampAge is the maximum allowed age of a scan timestamp during table
// stats collection, used when creating statistics AS OF SYSTEM TIME. The
@@ -79,7 +89,7 @@ func (dsp *DistSQLPlanner) createAndAttachSamplers(
// since we only support one reqStat at a time.
for _, s := range reqStats {
if s.histogram {
-sampler.SampleSize = histogramSamples
+sampler.SampleSize = uint32(histogramSamples.Get(&dsp.st.SV))
// This could be anything >= 2 to produce a histogram, but the max number
// of buckets is probably also a reasonable minimum number of samples. (If
// there are fewer rows than this in the table, there will be fewer
@@ -469,9 +479,10 @@ func (dsp *DistSQLPlanner) createPlanForCreateStats(
) (*PhysicalPlan, error) {
reqStats := make([]requestedStat, len(details.ColumnStats))
histogramCollectionEnabled := stats.HistogramClusterMode.Get(&dsp.st.SV)
+defaultHistogramBuckets := uint32(stats.DefaultHistogramBuckets.Get(&dsp.st.SV))
for i := 0; i < len(reqStats); i++ {
histogram := details.ColumnStats[i].HasHistogram && histogramCollectionEnabled
-var histogramMaxBuckets uint32 = stats.DefaultHistogramBuckets
+var histogramMaxBuckets = defaultHistogramBuckets
if details.ColumnStats[i].HistogramMaxBuckets > 0 {
histogramMaxBuckets = details.ColumnStats[i].HistogramMaxBuckets
}
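The validator `settings.NonNegativeIntWithMaximum(math.MaxUint32)` in the hunk above matters because the setting's int64 value is later down-cast with `uint32(histogramSamples.Get(&dsp.st.SV))`; without the bound, an oversized value would silently truncate. A standalone sketch of the same guard (`clampToUint32` is a hypothetical helper, not CockroachDB code):

```go
package main

import (
	"fmt"
	"math"
)

// clampToUint32 mimics validating a setting value so that a later uint32
// down-cast (as in sampler.SampleSize = uint32(v)) cannot silently truncate.
func clampToUint32(v int64) (uint32, error) {
	if v < 0 || v > math.MaxUint32 {
		return 0, fmt.Errorf("value %d out of range for uint32", v)
	}
	return uint32(v), nil
}

func main() {
	if v, err := clampToUint32(10000); err == nil {
		fmt.Println(v) // 10000
	}
	// One past MaxUint32 would wrap to 0 under a bare cast; the guard rejects it.
	_, err := clampToUint32(math.MaxUint32 + 1)
	fmt.Println(err != nil) // true
}
```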
5 changes: 4 additions & 1 deletion pkg/sql/importer/import_job.go
@@ -1036,7 +1036,10 @@ func (r *importResumer) writeStubStatisticsForImportedTables(
// single-column stats to avoid the appearance of perfectly correlated
// columns.
multiColEnabled := false
-statistics, err := sql.StubTableStats(desc, jobspb.ImportStatsName, multiColEnabled)
+defaultHistogramBuckets := uint32(stats.DefaultHistogramBuckets.Get(execCfg.SV()))
+statistics, err := sql.StubTableStats(
+	desc, jobspb.ImportStatsName, multiColEnabled, defaultHistogramBuckets,
+)
if err == nil {
for _, statistic := range statistics {
statistic.RowCount = rowCount
48 changes: 48 additions & 0 deletions pkg/sql/logictest/testdata/logic_test/alter_table
@@ -3194,3 +3194,51 @@ subtest alter_non_existent_table_with_if_exists

statement ok
ALTER TABLE IF EXISTS t_non_existent_99185 ADD FOREIGN KEY (i) REFERENCES t_other_99185 (i);

# This subtest tests behavior when we have add/drop column and add constraint in one stmt.
subtest 99281

statement ok
SET experimental_enable_unique_without_index_constraints = true;
CREATE TABLE t_99281 (i INT PRIMARY KEY, j INT NOT NULL, k INT NOT NULL, FAMILY "primary" (i,j,k));
INSERT INTO t_99281 VALUES (0,0,0), (1,0,1);

statement error pq: could not create unique constraint ".*"
ALTER TABLE t_99281 ADD UNIQUE WITHOUT INDEX (j);

statement error pq: could not create unique constraint ".*"
ALTER TABLE t_99281 ADD COLUMN p INT DEFAULT unique_rowid(), ADD UNIQUE WITHOUT INDEX (j);

# The following statement will cause the stmt to hang using the legacy schema changer.
skipif config local-legacy-schema-changer
skipif config local-mixed-22.2-23.1
statement error pq: could not create unique constraint ".*"
ALTER TABLE t_99281 DROP COLUMN k, ADD UNIQUE WITHOUT INDEX (j);

statement error pq: validation of CHECK "i > 0:::INT8" failed on row: i=0, j=0, k=0, p=[0-9]+
ALTER TABLE t_99281 ADD COLUMN p INT DEFAULT unique_rowid(), ADD CHECK (i > 0);

statement error pq: validation of CHECK "j > 0:::INT8" failed on row: i=[0-1], j=0, k=[0-1], p=[0-9]+
ALTER TABLE t_99281 ADD COLUMN p INT DEFAULT unique_rowid(), ADD CHECK (i >= 0), ADD CHECK (j > 0);

statement ok
CREATE TABLE t_99281_other (i INT PRIMARY KEY);

statement error pq: foreign key violation: "t_99281" row j=0, i=[0-1] has no match in "t_99281_other"
ALTER TABLE t_99281 ADD COLUMN p INT DEFAULT unique_rowid(), ADD FOREIGN KEY (j) REFERENCES t_99281_other;

# The following statement is not supported using the legacy schema changer.
skipif config local-legacy-schema-changer
skipif config local-mixed-22.2-23.1
statement error pq: foreign key violation: "t_99281" row j=0, i=[0-1] has no match in "t_99281_other"
ALTER TABLE t_99281 ALTER PRIMARY KEY USING COLUMNS (k), ADD FOREIGN KEY (j) REFERENCES t_99281_other;

query TT
show create table t_99281
----
t_99281 CREATE TABLE public.t_99281 (
i INT8 NOT NULL,
j INT8 NOT NULL,
k INT8 NOT NULL,
CONSTRAINT t_99281_pkey PRIMARY KEY (i ASC)
)
63 changes: 63 additions & 0 deletions pkg/sql/logictest/testdata/logic_test/distsql_stats
@@ -121,6 +121,69 @@ s1 {a} 256 4 0 true
let $json_stats
SHOW STATISTICS USING JSON FOR TABLE data

# Verify that we can control the number of samples and buckets collected.
statement ok
SET CLUSTER SETTING sql.stats.histogram_buckets.count = 3

statement ok
CREATE STATISTICS s2 ON a FROM data

let $hist_id_2
SELECT histogram_id FROM [SHOW STATISTICS FOR TABLE data] WHERE statistics_name = 's2'

query TIRI colnames
SHOW HISTOGRAM $hist_id_2
----
upper_bound range_rows distinct_range_rows equal_rows
1 0 0 64
3 64 1 64
4 0 0 64

# We can verify the number of samples collected based on the number of
# buckets produced.
statement ok
SET CLUSTER SETTING sql.stats.histogram_buckets.count = 20000

statement ok
SET CLUSTER SETTING sql.stats.histogram_samples.count = 20000

statement ok
CREATE TABLE big (i INT PRIMARY KEY);
INSERT INTO big SELECT generate_series(1, 20000)

statement ok
CREATE STATISTICS s20000 FROM big

let $hist_id_20000
SELECT histogram_id FROM [SHOW STATISTICS FOR TABLE big] WHERE statistics_name = 's20000'

query I
SELECT count(*) FROM [SHOW HISTOGRAM $hist_id_20000]
----
20000

statement ok
SET CLUSTER SETTING sql.stats.histogram_samples.count = 500

statement ok
CREATE STATISTICS s500 FROM big

let $hist_id_500
SELECT histogram_id FROM [SHOW STATISTICS FOR TABLE big] WHERE statistics_name = 's500'

# Perform integer division by 10 because there may be 2 extra buckets added
# on either end of the histogram to account for the 20000 distinct values.
query I
SELECT (count(*) // 10) * 10 FROM [SHOW HISTOGRAM $hist_id_500]
----
500

statement ok
RESET CLUSTER SETTING sql.stats.histogram_buckets.count

statement ok
RESET CLUSTER SETTING sql.stats.histogram_samples.count

# ANALYZE is syntactic sugar for CREATE STATISTICS with default columns.
statement ok
ANALYZE data