68128: sql: settings from ALTER ROLE ... SET apply on session init r=RichardJCai a=rafiss

Fixes #21151

The default settings are applied when creating a new pgwire session. The
information is cached alongside authentication info to reduce the
latency of establishing a connection and minimize the chance of a login
availability hit.

Release note (sql change): Default session variable settings configured
by `ALTER ROLE ... SET` are now supported. The following order of
precedence is used for variable settings:

1. settings specified in the connection URL as query parameters.
2. per-role and per-database settings configured by ALTER ROLE.
3. per-role and all-database settings configured by ALTER ROLE.
4. all-role and per-database settings configured by ALTER ROLE.
5. all-role and all-database settings configured by ALTER ROLE.

RESET does not validate the setting name.
SET validates both the name and the proposed default value.
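The precedence order above can be sketched as a small lookup routine. The sketch below is a hypothetical illustration, not code from the CockroachDB tree: the names `defaultSetting` and `resolveDefault` are invented, and it assumes defaults are stored per (role, database) scope with an empty string meaning "ALL".

```go
package main

import "fmt"

// defaultSetting is a hypothetical record of one stored default:
// an empty Role or Database stands for the ALL scope.
type defaultSetting struct {
	Role     string // "" = all roles
	Database string // "" = all databases
	Settings map[string]string
}

// resolveDefault applies the precedence from the release note: a URL
// query parameter wins, then (role, db), (role, ALL), (ALL, db), (ALL, ALL).
func resolveDefault(name, urlValue, role, db string, stored []defaultSetting) (string, bool) {
	if urlValue != "" {
		return urlValue, true
	}
	// Probe the four ALTER ROLE scopes from most to least specific.
	scopes := [][2]string{{role, db}, {role, ""}, {"", db}, {"", ""}}
	for _, s := range scopes {
		for _, d := range stored {
			if d.Role == s[0] && d.Database == s[1] {
				if v, ok := d.Settings[name]; ok {
					return v, true
				}
			}
		}
	}
	return "", false
}

func main() {
	stored := []defaultSetting{
		{Role: "", Database: "", Settings: map[string]string{"timezone": "UTC"}},
		{Role: "alice", Database: "movr", Settings: map[string]string{"timezone": "America/New_York"}},
	}
	v, _ := resolveDefault("timezone", "", "alice", "movr", stored)
	fmt.Println(v) // the most specific scope wins
	v, _ = resolveDefault("timezone", "", "bob", "defaultdb", stored)
	fmt.Println(v) // falls through to the all-role, all-database default
}
```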

Note that the default settings for a role are not inherited if one role
is a member of another role that has default settings. Also, the
defaults _only_ apply during session initialization. Using `SET
DATABASE` to change databases does not apply default settings for that
database.

The `public`, `admin`, and `root` roles cannot have default session
variables configured. The `root` role will also never use the "all-role"
default settings. This keeps `root` session initialization free of extra
dependencies and makes it less likely for `root` authentication to
become unavailable during the loss of a node.

Changing the default settings for a role requires the role running the
ALTER command to either be an ADMIN or to have the CREATEROLE role
option. Only ADMINs can edit the default settings for another admin.
Furthermore, changing the default settings for `ALL` roles is
_only_ allowed for ADMINs. Roles without ADMIN or CREATEROLE _cannot_
change the default settings for themselves.
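The permission rules above can be summarized as a predicate. The function below is an illustrative sketch of the rules as stated in this release note, not the actual privilege-checking code in CockroachDB:

```go
package main

import "fmt"

// canAlterDefaults sketches the rules above: the role running
// ALTER ROLE ... SET must be an admin or hold the CREATEROLE option,
// and only admins may target another admin or the ALL scope.
func canAlterDefaults(runnerIsAdmin, runnerHasCreateRole, targetIsAdmin, targetIsAll bool) bool {
	if !runnerIsAdmin && !runnerHasCreateRole {
		return false // roles without ADMIN or CREATEROLE cannot change anything
	}
	if targetIsAll || targetIsAdmin {
		return runnerIsAdmin // ALL scope and admin targets are admin-only
	}
	return true
}

func main() {
	fmt.Println(canAlterDefaults(false, true, false, false)) // CREATEROLE on a normal role
	fmt.Println(canAlterDefaults(false, true, false, true))  // CREATEROLE targeting ALL
	fmt.Println(canAlterDefaults(true, false, true, false))  // admin editing an admin
}
```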

68284: cli/zip: avoid quadratic behavior in SQL schema retrieval r=ajwerner a=knz

Fixes #67515.

Prior to this patch, the `debug zip` command would issue a
`DatabaseDetails` request over the network for each database in turn,
and also `TableDetails` for each table.

Server-side, each of these individual requests would cause all
descriptors to be loaded into memory first.

The overall cost was thus quadratic in the number of descriptors,
resulting in unacceptable delays on large clusters with thousands of
descriptors.

It turns out none of this is necessary: we already collect the
descriptor details via `system.descriptors`, and the CREATE statements
can be collected trivially from the `crdb_internal.create_statements`
vtable, which also has linear cost.
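To see why the old approach was quadratic, consider a toy cost model: each per-object details request first loads all n descriptors server-side, and there are roughly n objects, giving n² descriptor loads overall versus a single linear scan. The helper names below are hypothetical, purely for illustration:

```go
package main

import "fmt"

// perObjectRequests models the old behavior: one DatabaseDetails or
// TableDetails request per object, each of which loads every one of the
// n descriptors into memory server-side.
func perObjectRequests(n int) int {
	loads := 0
	for i := 0; i < n; i++ { // one request per database/table
		loads += n // each request scans every descriptor
	}
	return loads
}

// singleScan models the replacement: one pass over system.descriptors
// (the crdb_internal.create_statements scan is likewise linear).
func singleScan(n int) int { return n }

func main() {
	for _, n := range []int{10, 1000} {
		fmt.Printf("n=%d: per-object loads=%d, single scan=%d\n",
			n, perObjectRequests(n), singleScan(n))
	}
}
```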

Release note (cli change): `cockroach debug zip` no longer retrieves
database and table details into separate files. The schema information
is collected by means of `system.descriptors` and `crdb_internal.create_statements`.

68295: sql: add cluster setting to preserve subquery and cte ordering r=DrewKimball a=DrewKimball

This patch adds an experimental cluster setting `propagate_input_ordering`
that indicates whether subqueries and CTEs should propagate their orderings
to their parent scopes, when the parent scope is unordered. As an example,
the following two queries should produce the following result when the
cluster setting is true:
```
select * from (select * from generate_series(1, 10) i order by i % 5 asc, i asc) tmp;
with tmp as (select * from generate_series(1, 10) i order by i % 5 asc, i asc) select * from tmp;
----
   5
  10
   1
   6
   2
   7
   3
   8
   4
   9
```
This allows CockroachDB to imitate Postgres behavior: while Postgres
does not guarantee that subquery ordering is maintained, it does
maintain it in practice. Some existing applications rely on this
behavior, so the ability to toggle this setting can help resolve
incompatibilities in some cases.
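The ordering in the example above is just a sort by the composite key `(i % 5, i)`; a small Go program reproduces the same sequence:

```go
package main

import (
	"fmt"
	"sort"
)

// orderByMod5 sorts a copy of xs by (x % 5, x), which is what
// ORDER BY i % 5 ASC, i ASC produces in the example queries.
func orderByMod5(xs []int) []int {
	out := append([]int(nil), xs...)
	sort.Slice(out, func(a, b int) bool {
		if out[a]%5 != out[b]%5 {
			return out[a]%5 < out[b]%5
		}
		return out[a] < out[b]
	})
	return out
}

func main() {
	fmt.Println(orderByMod5([]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}))
	// → [5 10 1 6 2 7 3 8 4 9]
}
```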

Fixes #68211

Release note: None

68459: sql: refactor default privileges for all roles r=arulajmani a=RichardJCai

Default privileges for all roles are refactored to be
more in line with regular roles.

Simplifies some code.

Release note: None

68514: dev: rip recording logic out of `dev` datadriven tests r=rail a=rickystewart

This feature isn't used by anyone who currently works on `dev`, and
manually updating the recording files seems to be the most convenient
way to work on these tests.

In the future we may consider migrating to a proper mocking framework
for `dev`, but for now this structure is fine.

Also rename recorder -> recording for accuracy.

Release note: None

68526: *: update to lib/pq v1.10.2 r=knz a=rafiss

Fixes #65320

This gets us off the cockroachdb/pq fork.

This revealed a bug in some of the tests and `debug` code: they were not
calling `rows.Close()` after querying the database. The new version of
lib/pq is more strict about requiring this.

Additionally, logic tests were not checking `NextResultSet()`, which is
a bug if the test executes multiple queries at once.
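The iteration contract involved here is the standard `database/sql` one: consume each result set, advance with `NextResultSet()`, and always call `Close()`. The sketch below demonstrates that pattern against a hypothetical in-memory fake (`fakeRows` is invented for illustration, standing in for `*sql.Rows`) rather than a real connection:

```go
package main

import "fmt"

// resultSource is a minimal stand-in for the *sql.Rows methods that
// matter here.
type resultSource interface {
	Next() bool
	NextResultSet() bool
	Err() error
	Close() error
}

// fakeRows holds several result sets, mimicking a multi-statement query.
type fakeRows struct {
	sets   [][]int
	set, i int
	closed bool
}

func (r *fakeRows) Next() bool {
	if r.set >= len(r.sets) || r.i >= len(r.sets[r.set]) {
		return false
	}
	r.i++
	return true
}

func (r *fakeRows) NextResultSet() bool {
	r.set++
	r.i = 0
	return r.set < len(r.sets)
}

func (r *fakeRows) Err() error   { return nil }
func (r *fakeRows) Close() error { r.closed = true; return nil }

// drainAll shows the correct pattern: drain every row of every result
// set, advance with NextResultSet, and Close even on early return.
func drainAll(rows resultSource) (sets int, err error) {
	defer rows.Close()
	for {
		for rows.Next() {
			// a real caller would Scan each row here
		}
		sets++
		if !rows.NextResultSet() {
			break
		}
	}
	return sets, rows.Err()
}

func main() {
	r := &fakeRows{sets: [][]int{{1, 2}, {3}}}
	n, _ := drainAll(r)
	fmt.Println(n, r.closed) // both result sets consumed, rows closed
}
```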

Release note: None

Co-authored-by: Rafi Shamim <[email protected]>
Co-authored-by: Raphael 'kena' Poss <[email protected]>
Co-authored-by: Drew Kimball <[email protected]>
Co-authored-by: richardjcai <[email protected]>
Co-authored-by: Ricky Stewart <[email protected]>
6 people committed Aug 6, 2021
7 parents 6fafc56 + 617e51e + 9f0747f + f983edb + c69c62b + 9697387 + 4740eb8 commit c19b1e3
Showing 75 changed files with 2,069 additions and 1,624 deletions.
5 changes: 2 additions & 3 deletions DEPS.bzl
@@ -2603,9 +2603,8 @@ def go_deps():
name = "com_github_lib_pq",
build_file_proto_mode = "disable_global",
importpath = "github.com/lib/pq",
replace = "github.com/cockroachdb/pq",
sum = "h1:xTc0ViFhuelzQZAYQOxMR2J5QDO9/C+0L0fkPXIcoMI=",
version = "v0.0.0-20210517091544-990dd3347596",
sum = "h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=",
version = "v1.10.2",
)
go_repository(
name = "com_github_lib_pq_auth_kerberos",
5 changes: 1 addition & 4 deletions go.mod
@@ -103,7 +103,7 @@ require (
github.com/kr/pretty v0.2.1
github.com/kr/text v0.2.0
github.com/leanovate/gopter v0.2.5-0.20190402064358-634a59d12406
github.com/lib/pq v1.8.0
github.com/lib/pq v1.10.2
github.com/lib/pq/auth/kerberos v0.0.0-20200720160335-984a6aa1ca46
github.com/lightstep/lightstep-tracer-go v0.24.0
github.com/linkedin/goavro/v2 v2.10.0
@@ -192,9 +192,6 @@ replace gopkg.in/yaml.v2 => github.com/cockroachdb/yaml v0.0.0-20180705215940-0e

replace github.com/knz/go-libedit => github.com/otan-cockroach/go-libedit v1.10.2-0.20201030151939-7cced08450e7

// Temporary workaround for #65320.
replace github.com/lib/pq => github.com/cockroachdb/pq v0.0.0-20210517091544-990dd3347596

// At the time of writing (i.e. as of this version below) the `etcd` repo is in the process of properly introducing
// modules, and as part of that uses an unsatisfiable version for this dependency (v3.0.0-00010101000000-000000000000).
// We just force it to the same SHA as the `go.etcd.io/etcd/raft/v3` module (they live in the same VCS root).
9 changes: 7 additions & 2 deletions go.sum
@@ -267,8 +267,6 @@ github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f h1:o/kfcElHqOi
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
github.com/cockroachdb/pebble v0.0.0-20210719141320-8c3bd06debb5 h1:Igd6YmtOZ77EgLAIaE9+mHl7+sAKaZ5m4iMI0Dz/J2A=
github.com/cockroachdb/pebble v0.0.0-20210719141320-8c3bd06debb5/go.mod h1:JXfQr3d+XO4bL1pxGwKKo09xylQSdZ/mpZ9b2wfVcPs=
github.com/cockroachdb/pq v0.0.0-20210517091544-990dd3347596 h1:xTc0ViFhuelzQZAYQOxMR2J5QDO9/C+0L0fkPXIcoMI=
github.com/cockroachdb/pq v0.0.0-20210517091544-990dd3347596/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/cockroachdb/redact v1.0.8/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/redact v1.1.0/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/redact v1.1.3 h1:AKZds10rFSIj7qADf0g46UixK8NNLwWTNdCIGS5wfSQ=
@@ -944,6 +942,13 @@ github.com/labstack/echo/v4 v4.1.11/go.mod h1:i541M3Fj6f76NZtHSj7TXnyM8n2gaodfvf
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/leanovate/gopter v0.2.5-0.20190402064358-634a59d12406 h1:+OUpk+IVvmKU0jivOVFGtOzA6U5AWFs8HE4DRzWLOUE=
github.com/leanovate/gopter v0.2.5-0.20190402064358-634a59d12406/go.mod h1:gNcbPWNEWRe4lm+bycKqxUYoH5uoVje5SkOJ3uoLer8=
github.com/lib/pq v0.0.0-20180327071824-d34b9ff171c2/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq/auth/kerberos v0.0.0-20200720160335-984a6aa1ca46 h1:q7hY+WNJTcSqJNGwJzXZYL++nWBaoKlKdgZOyY6jxz4=
github.com/lib/pq/auth/kerberos v0.0.0-20200720160335-984a6aa1ca46/go.mod h1:jydegJvs5JvVcuFD/YAT8JRmRVeOoRhtnGEgRnAoPpE=
github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20200305213919-a88bf8de3718 h1:lrdADj7ifyBpqGJ+cT4vE5ztUoAF87uUf76+epwPViY=
1 change: 1 addition & 0 deletions pkg/ccl/gssapiccl/BUILD.bazel
@@ -23,6 +23,7 @@ go_library(
"//pkg/ccl/utilccl",
"//pkg/security",
"//pkg/sql",
"//pkg/sql/sem/tree",
"//pkg/sql/pgwire",
"//pkg/sql/pgwire/hba",
"@com_github_cockroachdb_errors//:errors",
3 changes: 2 additions & 1 deletion pkg/ccl/gssapiccl/gssapi.go
@@ -25,6 +25,7 @@ import (
"github.com/cockroachdb/cockroach/pkg/sql"
"github.com/cockroachdb/cockroach/pkg/sql/pgwire"
"github.com/cockroachdb/cockroach/pkg/sql/pgwire/hba"
"github.com/cockroachdb/cockroach/pkg/sql/sem/tree"
"github.com/cockroachdb/errors"
)

@@ -46,7 +47,7 @@ func authGSS(
c pgwire.AuthConn,
tlsState tls.ConnectionState,
_ pgwire.PasswordRetrievalFn,
_ pgwire.PasswordValidUntilFn,
_ *tree.DTimestamp,
execCfg *sql.ExecutorConfig,
entry *hba.Entry,
) (security.UserAuthHook, error) {
14 changes: 3 additions & 11 deletions pkg/ccl/serverccl/role_authentication_test.go
@@ -115,7 +115,9 @@ func TestVerifyPassword(t *testing.T) {
t.Run("", func(t *testing.T) {
execCfg := s.ExecutorConfig().(sql.ExecutorConfig)
username := security.MakeSQLUsernameFromPreNormalizedString(tc.username)
exists, canLogin, pwRetrieveFn, validUntilFn, err := sql.GetUserHashedPassword(context.Background(), &execCfg, &ie, username)
exists, canLogin, validUntil, _, pwRetrieveFn, err := sql.GetUserSessionInitInfo(
context.Background(), &execCfg, &ie, username, "", /* databaseName */
)

if err != nil {
t.Errorf(
@@ -148,16 +150,6 @@
valid = false
}

validUntil, err := validUntilFn(ctx)
if err != nil {
t.Errorf(
"credentials %s/%s failed with error %s, wanted no error",
tc.username,
tc.password,
err,
)
}

if validUntil != nil {
if validUntil.Time.Sub(timeutil.Now()) < 0 {
expired = true
12 changes: 8 additions & 4 deletions pkg/cli/doctor.go
@@ -190,8 +190,8 @@ func fromCluster(
stmt := `
SELECT id, descriptor, crdb_internal_mvcc_timestamp AS mod_time_logical
FROM system.descriptor ORDER BY id`
checkColumnExistsStmt := "SELECT crdb_internal_mvcc_timestamp"
_, err := sqlConn.Query(maybePrint(checkColumnExistsStmt), nil)
checkColumnExistsStmt := "SELECT crdb_internal_mvcc_timestamp FROM system.descriptor LIMIT 1"
_, err := sqlConn.QueryRow(maybePrint(checkColumnExistsStmt), nil)
// On versions before 20.2, the system.descriptor won't have the builtin
// crdb_internal_mvcc_timestamp. If we can't find it, use NULL instead.
if pqErr := (*pq.Error)(nil); errors.As(err, &pqErr) {
@@ -200,6 +200,8 @@ FROM system.descriptor ORDER BY id`
SELECT id, descriptor, NULL AS mod_time_logical
FROM system.descriptor ORDER BY id`
}
} else if err != nil {
return nil, nil, nil, err
}
descTable = make([]doctor.DescriptorTableRow, 0)

@@ -238,8 +240,8 @@ FROM system.descriptor ORDER BY id`

stmt = `SELECT "parentID", "parentSchemaID", name, id FROM system.namespace`

checkColumnExistsStmt = `SELECT "parentSchemaID" FROM system.namespace LIMIT 0`
_, err = sqlConn.Query(maybePrint(checkColumnExistsStmt), nil)
checkColumnExistsStmt = `SELECT "parentSchemaID" FROM system.namespace LIMIT 1`
_, err = sqlConn.QueryRow(maybePrint(checkColumnExistsStmt), nil)
// On versions before 20.1, table system.namespace does not have this column.
// In that case the ParentSchemaID for tables is 29 and for databases is 0.
if pqErr := (*pq.Error)(nil); errors.As(err, &pqErr) {
@@ -248,6 +250,8 @@ FROM system.descriptor ORDER BY id`
SELECT "parentID", CASE WHEN "parentID" = 0 THEN 0 ELSE 29 END AS "parentSchemaID", name, id
FROM system.namespace`
}
} else if err != nil {
return nil, nil, nil, err
}

namespaceTable = make([]doctor.NamespaceTableRow, 0)
46 changes: 2 additions & 44 deletions pkg/cli/testdata/zip/partial1
@@ -22,6 +22,8 @@ debug zip --concurrency=1 --cpu-profile-duration=0s /dev/null
[cluster] retrieving SQL data for system.descriptor... writing output: debug/system.descriptor.txt... done
[cluster] retrieving SQL data for system.namespace... writing output: debug/system.namespace.txt... done
[cluster] retrieving SQL data for system.scheduled_jobs... writing output: debug/system.scheduled_jobs.txt... done
[cluster] retrieving SQL data for "".crdb_internal.create_statements... writing output: debug/crdb_internal.create_statements.txt... done
[cluster] retrieving SQL data for "".crdb_internal.create_type_statements... writing output: debug/crdb_internal.create_type_statements.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_node_liveness... writing output: debug/crdb_internal.kv_node_liveness.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_node_status... writing output: debug/crdb_internal.kv_node_status.txt... done
[cluster] retrieving SQL data for crdb_internal.kv_store_status... writing output: debug/crdb_internal.kv_store_status.txt... done
@@ -272,49 +274,5 @@ debug zip --concurrency=1 --cpu-profile-duration=0s /dev/null
[node 3] writing range 40... converting to JSON... writing binary output: debug/nodes/3/ranges/40.json... done
[node 3] writing range 41... converting to JSON... writing binary output: debug/nodes/3/ranges/41.json... done
[node 3] writing range 42... converting to JSON... writing binary output: debug/nodes/3/ranges/42.json... done
[cluster] doctor examining cluster...... writing binary output: debug/reports/doctor.txt... done
[cluster] requesting list of SQL databases... received response... done
[cluster] 3 databases found
[cluster] [database: defaultdb] requesting database details... received response... converting to JSON... writing binary output: debug/schema/[email protected]... done
[cluster] [database: defaultdb] 0 tables found
[cluster] [database: postgres] requesting database details... received response... converting to JSON... writing binary output: debug/schema/[email protected]... done
[cluster] [database: postgres] 0 tables found
[cluster] [database: system] requesting database details... received response... converting to JSON... writing binary output: debug/schema/[email protected]... done
[cluster] [database: system] 35 tables found
[cluster] [database: system] [table: public.comments] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_comments.json... done
[cluster] [database: system] [table: public.database_role_settings] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_database_role_settings.json... done
[cluster] [database: system] [table: public.descriptor] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_descriptor.json... done
[cluster] [database: system] [table: public.eventlog] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_eventlog.json... done
[cluster] [database: system] [table: public.jobs] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_jobs.json... done
[cluster] [database: system] [table: public.join_tokens] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_join_tokens.json... done
[cluster] [database: system] [table: public.lease] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_lease.json... done
[cluster] [database: system] [table: public.locations] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_locations.json... done
[cluster] [database: system] [table: public.migrations] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_migrations.json... done
[cluster] [database: system] [table: public.namespace] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_namespace.json... done
[cluster] [database: system] [table: public.protected_ts_meta] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_protected_ts_meta.json... done
[cluster] [database: system] [table: public.protected_ts_records] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_protected_ts_records.json... done
[cluster] [database: system] [table: public.rangelog] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_rangelog.json... done
[cluster] [database: system] [table: public.replication_constraint_stats] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_replication_constraint_stats.json... done
[cluster] [database: system] [table: public.replication_critical_localities] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_replication_critical_localities.json... done
[cluster] [database: system] [table: public.replication_stats] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_replication_stats.json... done
[cluster] [database: system] [table: public.reports_meta] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_reports_meta.json... done
[cluster] [database: system] [table: public.role_members] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_role_members.json... done
[cluster] [database: system] [table: public.role_options] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_role_options.json... done
[cluster] [database: system] [table: public.scheduled_jobs] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_scheduled_jobs.json... done
[cluster] [database: system] [table: public.settings] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_settings.json... done
[cluster] [database: system] [table: public.sql_instances] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_sql_instances.json... done
[cluster] [database: system] [table: public.sqlliveness] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_sqlliveness.json... done
[cluster] [database: system] [table: public.statement_bundle_chunks] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_statement_bundle_chunks.json... done
[cluster] [database: system] [table: public.statement_diagnostics] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_statement_diagnostics.json... done
[cluster] [database: system] [table: public.statement_diagnostics_requests] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_statement_diagnostics_requests.json... done
[cluster] [database: system] [table: public.statement_statistics] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_statement_statistics.json... done
[cluster] [database: system] [table: public.table_statistics] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_table_statistics.json... done
[cluster] [database: system] [table: public.tenant_usage] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_tenant_usage.json... done
[cluster] [database: system] [table: public.tenants] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_tenants.json... done
[cluster] [database: system] [table: public.transaction_statistics] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_transaction_statistics.json... done
[cluster] [database: system] [table: public.ui] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_ui.json... done
[cluster] [database: system] [table: public.users] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_users.json... done
[cluster] [database: system] [table: public.web_sessions] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_web_sessions.json... done
[cluster] [database: system] [table: public.zones] requesting table details... received response... converting to JSON... writing binary output: debug/schema/system/public_zones.json... done
[cluster] pprof summary script... writing binary output: debug/pprof-summary.sh... done
[cluster] hot range summary script... writing binary output: debug/hot-ranges.sh... done