sql: allow secondary tenants to set/show zone configurations #69169

Merged: 3 commits into cockroachdb:master on Aug 20, 2021

Conversation

@arulajmani (Collaborator) commented Aug 19, 2021

Part of #67679. The zone configurations themselves have no effect right now. See individual commits for details.

@arulajmani requested review from a team as code owners (August 19, 2021 18:54)
@arulajmani requested a review from a team (August 19, 2021 18:54)

@arulajmani force-pushed the secondary-tenant-system-zones branch 2 times, most recently from b3faf34 to 58a9407 (August 19, 2021 19:51)
@arulajmani force-pushed the secondary-tenant-system-zones branch from 58a9407 to 345d00c (August 19, 2021 20:28)
@irfansharif (Contributor) left a comment

Just a bit unclear on how we're expecting to feature gate this. Everything else LGTM; deferring to Andrew's rubber stamp.

Reviewed 11 of 11 files at r1, 27 of 27 files at r2, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner, @arulajmani, and @RichardJCai)


pkg/migration/migrations/migrations.go, line 125 at r1 (raw file):

	),
	migration.NewTenantMigration(
		"add system.zones table",

nit: add "for secondary tenants"


pkg/sql/tenant.go, line 160 at r1 (raw file):

	schema := bootstrap.MakeMetadataSchema(
		codec,
		p.ExtendedEvalContext().ExecCfg.DefaultZoneConfig, /* defaultZoneConfig */

Curious to hear others' thoughts on what the tenant's range default should be. This seems fine but is static. If we want to be able to configure new tenant's RANGE DEFAULT, I thought it could be fun to overload RANGE TENANT for it. Worth a TODO perhaps?


pkg/sql/zone_config.go, line 425 at r2 (raw file):

// TODO(arul): All usages of this function can be removed in 22.1, as by then
// all secondary tenants will have `system.zones`.
func ZonesTableExists(ctx context.Context, codec keys.SQLCodec, settings *cluster.Settings) bool {

Though convenient, I think the signature hides away the fact that we're making a version check. Don't we also want to add a hidden feature flag that's disabled by default? We could do that check here too; just name this differently and adjust the surrounding comments at the callsites.
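To illustrate the naming concern: the helper bundles a cluster-version check behind an innocuous-looking boolean. A minimal sketch of the shape being discussed, with stand-in types for CockroachDB's `keys.SQLCodec` and `clusterversion.Handle` (all names and the version constant here are illustrative, not the real API):

```go
package main

import "fmt"

// codec is a stand-in for keys.SQLCodec.
type codec struct{ systemTenant bool }

func (c codec) ForSystemTenant() bool { return c.systemTenant }

// versionHandle is a stand-in for clusterversion.Handle.
type versionHandle struct{ active int }

func (v versionHandle) IsActive(version int) bool { return v.active >= version }

// tenantZonesVersion is a hypothetical version gate for the migration
// that adds system.zones to secondary tenants.
const tenantZonesVersion = 21

// zonesTableExists: the system tenant always has system.zones; secondary
// tenants only have it once the migration's version gate is active. The
// review point is that the version check is hidden inside this helper.
func zonesTableExists(c codec, v versionHandle) bool {
	return c.ForSystemTenant() || v.IsActive(tenantZonesVersion)
}

func main() {
	fmt.Println(zonesTableExists(codec{systemTenant: true}, versionHandle{active: 20}))
	fmt.Println(zonesTableExists(codec{systemTenant: false}, versionHandle{active: 20}))
	fmt.Println(zonesTableExists(codec{systemTenant: false}, versionHandle{active: 21}))
}
```

Renaming the helper to mention the version gate (or passing the version handle explicitly, as suggested later in the thread) makes the check visible at each call site.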


pkg/sql/catalog/bootstrap/metadata.go, line 285 at r1 (raw file):

// CreateZoneConfigKV creates a kv pair for the zone config for the given key
// and config value.
func CreateZoneConfigKV(

Does this need to be exported?


pkg/sql/catalog/bootstrap/metadata.go, line 310 at r1 (raw file):

		CreateZoneConfigKV(keys.RootNamespaceID, codec, defaultZoneConfig))

	if codec.ForSystemTenant() {

Reduce a level of indentation for everything below by returning early if we're not the system tenant.


pkg/sql/gcjob/descriptor_utils.go, line 68 at r2 (raw file):

		return nil
	}
	if !sql.ZonesTableExists(ctx, codec, settings) {

For everywhere we do this check: to feature gate this correctly, I think we want some way to prevent secondary tenants from mucking about with zone configs unless a one-way, hidden, experimental flag/cluster setting is toggled. With this PR as is, tenants are able to set zone configs that will confusingly never get acted on. I don't know if that's what we want.


pkg/sql/logictest/testdata/logic_test/zone_config, line 1 at r2 (raw file):

# LogicTest: !3node-tenant(49854)

Sweet!


pkg/sql/logictest/testdata/logic_test/zone_config_system_tenant, line 8 at r2 (raw file):


# TODO(arul): move this back to `zone_config` once we validate cluster regions
# for tenants.

Want to add cluster region validation to #67679 to not lose track?

@arulajmani (Collaborator, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner, @arulajmani, @irfansharif, and @RichardJCai)


pkg/sql/zone_config.go, line 425 at r2 (raw file):

Previously, irfansharif (irfan sharif) wrote…

Though convenient, I think the signature hides away the fact that we're making a version check. Don't we also want to add a hidden feature flag that's disabled by default? We could do that check here too; just name this differently and adjust the surrounding comments at the callsites.

Do we want to bundle the hidden feature flag as part of this method or do we want it to be a bit more targeted? Consider the case where a tenant has laid down some zone configs before flipping this cluster setting. We probably want to GC them and allow for introspection, right?

I was thinking that the only place this cluster setting would be consulted is when setting new zone configs. All other call sites should really just care about if the zones table exists or not. Wdyt?


pkg/sql/catalog/bootstrap/metadata.go, line 285 at r1 (raw file):

Previously, irfansharif (irfan sharif) wrote…

Does this need to be exported?

Not anymore, good catch!


pkg/sql/gcjob/descriptor_utils.go, line 68 at r2 (raw file):

Previously, irfansharif (irfan sharif) wrote…

For everywhere we do this check: to feature gate this correctly, I think we want some way to prevent secondary tenants from mucking about with zone configs unless a one-way, hidden, experimental flag/cluster setting is toggled. With this PR as is, tenants are able to set zone configs that will confusingly never get acted on. I don't know if that's what we want.

In this particular instance we probably want to delete the zone config regardless of the cluster setting though, right? Otherwise we'll leave orphaned entries in system.zones. See my comment above, I think we probably just want to gate new zone config writes behind that cluster setting instead of every place we do this check.
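The split being proposed here can be sketched as follows: only *new* zone config writes consult the hidden cluster setting, while reads and GC only require that `system.zones` exists, so pre-existing configs stay introspectable and collectable even with the setting off. All types and names below are illustrative stand-ins, not the actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// store sketches the two independent gates discussed in this thread.
type store struct {
	zones         map[string]string
	writesEnabled bool // stand-in for the hidden cluster setting
	zonesTableOK  bool // stand-in for the system.zones existence/version check
}

// setZoneConfig is the only path gated by the cluster setting.
func (s *store) setZoneConfig(name, cfg string) error {
	if !s.zonesTableOK {
		return errors.New("system.zones does not exist")
	}
	if !s.writesEnabled {
		return errors.New("zone configs disabled for secondary tenants")
	}
	s.zones[name] = cfg
	return nil
}

// gcZoneConfig is deliberately not gated by the setting: orphaned entries
// must remain deletable even after the setting is flipped off.
func (s *store) gcZoneConfig(name string) {
	if s.zonesTableOK {
		delete(s.zones, name)
	}
}

func main() {
	s := &store{zones: map[string]string{"t1": "old"}, zonesTableOK: true}
	fmt.Println(s.setZoneConfig("t2", "new")) // blocked: setting is off
	s.gcZoneConfig("t1")                      // GC still proceeds
	fmt.Println(len(s.zones))
}
```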

@ajwerner (Contributor) left a comment

:lgtm:

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @arulajmani, @irfansharif, and @RichardJCai)


pkg/migration/migrations/zones.go, line 27 at r2 (raw file):

// zonesTableForSecondaryTenants adds system.zones to secondary tenants and
// seeds it.
func zonesTableForSecondaryTenants(

If I were being really nitpicky, I'd ask for a test of this.


pkg/sql/tenant.go, line 160 at r1 (raw file):

Previously, irfansharif (irfan sharif) wrote…

Curious to hear others' thoughts on what the tenant's range default should be. This seems fine but is static. If we want to be able to configure new tenant's RANGE DEFAULT, I thought it could be fun to overload RANGE TENANT for it. Worth a TODO perhaps?

I think configuring this warrants an issue on GitHub.


pkg/sql/zone_config.go, line 425 at r2 (raw file):

Previously, arulajmani (Arul Ajmani) wrote…

Do we want to bundle the hidden feature flag as part of this method or do we want it to be a bit more targeted? Consider the case where a tenant has laid down some zone configs before flipping this cluster setting. We probably want to GC them and allow for introspection, right?

I was thinking that the only place this cluster setting would be consulted is when setting new zone configs. All other call sites should really just care about if the zones table exists or not. Wdyt?

You can just pass in the clusterversion.Handle if you wanted.

I don't feel strongly on whether this is the spot that also reads a testing knob. I haven't gotten to such a knob yet or where this is called.


pkg/sql/gcjob/descriptor_utils.go, line 68 at r2 (raw file):

Previously, arulajmani (Arul Ajmani) wrote…

I think we probably just want to gate new zone config writes behind that cluster setting instead of every place we do this check.

👍

This patch bootstraps `system.zones` for secondary tenants and adds a
migration for existing tenants. We also seed secondary tenants with the
RANGE DEFAULT zone configuration.

Release note: None

This patch removes all checks that blocked secondary tenants from
writing to and reading from system.zones and replaces them with
checks to ensure `system.zones` exists. This allows us to run our
logic test suite for zone configurations in multi-tenant mode as well.

Currently, we don't validate node localities for secondary tenants when
setting zone configurations. A followup patch will address this.

Release note: None
@arulajmani force-pushed the secondary-tenant-system-zones branch from 345d00c to 84568b9 (August 20, 2021 20:09)
@blathers-crl blathers-crl bot requested a review from irfansharif August 20, 2021 20:09
@arulajmani (Collaborator, Author) left a comment

Added a third commit that adds a cluster setting gating secondary tenants' writes to system.zones. It's off by default, has no effect for the system tenant, and does not affect already existing zone configs -- only new ones.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @arulajmani, @irfansharif, and @RichardJCai)


pkg/sql/zone_config.go, line 425 at r2 (raw file):

Previously, ajwerner wrote…

You can just pass in the clusterversion.Handle if you wanted.

I don't feel strongly on whether this is the spot that also reads a testing knob. I haven't gotten to such a knob yet or where this is called.

Done.

For completeness, it isn't a testing knob but a cluster setting. Wasn't there when you reviewed this, but I've tacked on a commit here that adds the cluster setting and checks for it when setting zone configurations for secondary tenants.
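The check the new commit introduces can be sketched as a small predicate: the setting is consulted only for secondary tenants, so flipping it has no effect on the system tenant. This is an illustrative stand-in, not the real code (which reads the setting from `p.ExecCfg().Settings.SV`, as the excerpt below this comment shows):

```go
package main

import (
	"errors"
	"fmt"
)

// checkZoneConfigAllowed mirrors the gate's logic: system tenants are
// always allowed; secondary tenants need the cluster setting enabled.
func checkZoneConfigAllowed(forSystemTenant, settingEnabled bool) error {
	if !forSystemTenant && !settingEnabled {
		// The review below suggests a generic message rather than naming
		// the hidden setting.
		return errors.New("zone configs disabled for multi-tenancy")
	}
	return nil
}

func main() {
	fmt.Println(checkZoneConfigAllowed(true, false))  // system tenant: always allowed
	fmt.Println(checkZoneConfigAllowed(false, false)) // secondary, setting off: blocked
	fmt.Println(checkZoneConfigAllowed(false, true))  // secondary, setting on: allowed
}
```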


pkg/sql/catalog/bootstrap/metadata.go, line 310 at r1 (raw file):

Previously, irfansharif (irfan sharif) wrote…

Reduce a level of indentation for everything below by returning early if we're not the system tenant.

Done.


pkg/sql/logictest/testdata/logic_test/zone_config_system_tenant, line 8 at r2 (raw file):

Previously, irfansharif (irfan sharif) wrote…

Want to add cluster region validation to #67679 to not lose track?

Filed: #69199 and added it to the tracking issue.

pkg/spanconfig/spanconfigmanager/manager.go (outdated; resolved)
	if !p.ExecCfg().Codec.ForSystemTenant() &&
		!secondaryTenantZoneConfigsEnabled.Get(&p.ExecCfg().Settings.SV) {
		return nil, pgerror.Newf(pgcode.FeatureNotSupported,
			"secondary tenants cannot set zone configurations unless %s is enabled",
Contributor left a comment

Since this is an experimental cluster setting we want to hide from users, it's fine to not refer to the setting name. "zone configs disabled for multi-tenancy".

Collaborator (Author) left a comment

Good point, I'll just re-use the "unsupported with multi-tenancy" here.

pkg/sql/zone_config.go (outdated; resolved)
This patch gates setting zone configurations for secondary tenants behind
the `sql.zone_configs.allow_for_secondary_tenant.enabled` cluster setting.
For now, the setting is not public.

Release note: None
@arulajmani force-pushed the secondary-tenant-system-zones branch from 84568b9 to e3be882 (August 20, 2021 20:59)
@arulajmani (Collaborator, Author)

TFTRs!

bors r=irfansharif,ajwerner

@craig (craig bot, Contributor) commented Aug 20, 2021

Build succeeded:

@craig craig bot merged commit 466e292 into cockroachdb:master Aug 20, 2021