settings: multi-tenant cluster settings tracking issue #73857
Comments
FYI this should be done in consultation with various product areas/eng teams so that they understand how to classify settings going forward.
Regarding the remaining identified steps, what skills/expertise are required:

- This is probably SQL schema
- SQL experience
- SQL experience
- Server
- SQL experience

cc @ajstorm
This implements the tenant-side code for setting overrides. Specifically, the tenant connector now implements the `OverridesMonitor` interface using the `TenantSettings` API. The server side of this API is not yet implemented, so this commit does not include end-to-end tests. Basic functionality is verified through a unit test that mocks the server-side API. Informs cockroachdb#73857. Release note: None
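For illustration, a minimal sketch of what such a mock-backed unit test can look like; `Override`, `mockTenantSettingsServer`, and the test name are hypothetical stand-ins, not the actual connector test code:

```go
package connector_test

import "testing"

// Override pairs a setting name with its encoded value and type,
// mirroring the shape of a TenantSettings event. All names in this
// sketch are illustrative, not the actual connector API.
type Override struct {
	Name, Value, Type string
}

// mockTenantSettingsServer stands in for the not-yet-implemented
// server side of the TenantSettings API: it just streams batches of
// overrides over a channel.
type mockTenantSettingsServer struct {
	events chan []Override
}

func (m *mockTenantSettingsServer) send(ovs ...Override) {
	m.events <- ovs
}

func TestOverridesMonitorSeesUpdates(t *testing.T) {
	srv := &mockTenantSettingsServer{events: make(chan []Override, 1)}

	// Push an override the way the real server eventually would.
	srv.send(Override{Name: "some.setting", Value: "true", Type: "b"})

	// A real test would start the tenant connector against srv and
	// assert that its overrides snapshot reflects the update; here we
	// only check the mock's plumbing.
	got := <-srv.events
	if len(got) != 1 || got[0].Name != "some.setting" {
		t.Fatalf("unexpected overrides: %v", got)
	}
}
```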
75711: multitenant: listen for setting overrides r=RaduBerinde a=RaduBerinde

#### settings: add EncodedValue proto, update tenant settings API

This commit consolidates multiple uses of encoded setting values (raw value and type strings) into a `settings.EncodedValue` proto. The tenant settings roachpb API (not used yet) is updated to use this.

Release note: None

#### multitenant: listen for setting overrides

This implements the tenant-side code for setting overrides. Specifically, the tenant connector now implements the `OverridesMonitor` interface using the `TenantSettings` API. The server side of this API is not yet implemented, so this commit does not include end-to-end tests. Basic functionality is verified through a unit test that mocks the server-side API. Informs #73857.

Release note: None

Co-authored-by: Radu Berinde <[email protected]>
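To make the `EncodedValue` consolidation concrete, here is a minimal sketch: one struct carrying the encoded value and its type string instead of parallel string pairs threaded through APIs. The field names follow the description above; `settingsdemo` and the two functions are hypothetical, and the real `settings.EncodedValue` is a generated proto.

```go
// Package settingsdemo sketches the consolidation described above.
package settingsdemo

// EncodedValue is an encoded setting value plus the identifier of the
// type it decodes as (e.g. "b" for bool, "i" for int).
type EncodedValue struct {
	Value string // raw encoded value
	Type  string // setting type identifier
}

// Before: callers passed two loose strings around.
func setSettingOld(name, rawValue, rawType string) { /* ... */ }

// After: callers pass a single EncodedValue.
func setSettingNew(name string, val EncodedValue) { /* ... */ }
```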
Fixes cockroachdb#70555. In order to limit the number of span configs a tenant's able to install, we introduce a tenant-side `spanconfig.Limiter`. It presents the following interface:

```go
// Limiter is used to limit the number of span configs installed by
// secondary tenants. It considers the committed and uncommitted
// state of a table descriptor and computes the "span" delta, each
// unit we can apply a configuration over. It uses these deltas to
// maintain an aggregate counter, informing the caller if exceeding
// the configured limit.
type Limiter interface {
	ShouldLimit(
		ctx context.Context, txn *kv.Txn,
		committed, uncommitted catalog.TableDescriptor,
	) (bool, error)
}
```

This limiter only applies to secondary tenants. The counter is maintained in a newly introduced (tenant-only) system table, using the following schema:

```sql
CREATE TABLE system.span_count (
	singleton  BOOL DEFAULT TRUE,
	span_count INT NOT NULL,
	CONSTRAINT "primary" PRIMARY KEY (singleton),
	CONSTRAINT single_row CHECK (singleton),
	FAMILY "primary" (singleton, span_count)
);
```

We need just two integration points for `spanconfig.Limiter`:

- Right above `CheckTwoVersionInvariant`, where we're able to hook into the committed and to-be-committed descriptor state before txn commit.
- In the GC job, when GC-ing table state. We decrement a table's split count when GC-ing the table for good.

The per-tenant span config limit used is controlled by a new tenant read-only cluster setting: `spanconfig.tenant_limit`. Multi-tenant cluster settings (cockroachdb#73857) provides the infrastructure for the host tenant to be able to control this setting cluster-wide, or to target a specific tenant at a time.

We also need a migration here, to start tracking span counts for clusters with pre-existing tenants. We introduce a migration that scans over all table descriptors and seeds `system.span_count` with the right value. Given that cluster version gates disseminate asynchronously, we also need a preliminary version to start tracking incremental changes.

It's useful to introduce the notion of debt. This will be handy if/when we lower per-tenant limits, and also in the migration above, since it's possible for pre-existing tenants to have committed state in violation of the prescribed limit. When in debt, schema changes that add new splits will be rejected (dropping tables/indexes/partitions/etc. will work just fine).

When attempting a txn that goes over the configured limit, the UX is as follows:

```
> CREATE TABLE db.t2(i INT PRIMARY KEY);
pq: exceeded limit for number of table spans
```

Release note: None

Release justification: low risk, high benefit change
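To make the first integration point concrete, here is a hedged sketch of the commit-time hook; the function name, its placement, and the error code are assumptions layered on the `Limiter` interface quoted above, not the actual cockroach code:

```go
package sqlsketch // illustrative placement, not a real package

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/kv"
	"github.com/cockroachdb/cockroach/pkg/spanconfig"
	"github.com/cockroachdb/cockroach/pkg/sql/catalog"
	"github.com/cockroachdb/cockroach/pkg/sql/pgwire/pgcode"
	"github.com/cockroachdb/cockroach/pkg/sql/pgwire/pgerror"
)

// checkSpanCountLimit (hypothetical name) runs immediately before the
// two-version invariant check: it asks the limiter whether this txn's
// descriptor changes push the tenant over its span limit.
func checkSpanCountLimit(
	ctx context.Context,
	txn *kv.Txn,
	limiter spanconfig.Limiter,
	committed, uncommitted catalog.TableDescriptor,
) error {
	shouldLimit, err := limiter.ShouldLimit(ctx, txn, committed, uncommitted)
	if err != nil {
		return err
	}
	if shouldLimit {
		// Matches the UX shown above: the txn is rejected outright.
		return pgerror.Newf(pgcode.ConfigurationLimitExceeded,
			"exceeded limit for number of table spans")
	}
	return nil // proceed to CheckTwoVersionInvariant and commit
}
```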
Fixes cockroachdb#70555. In order to limit the number of span configs a tenant's able to install, we introduce a tenant-side `spanconfig.Limiter`. It presents the following interface:

```go
// Limiter is used to limit the number of span configs installed by
// secondary tenants. It takes in a delta (typically the difference
// in span configs between the committed and uncommitted state in
// the txn), uses it to maintain an aggregate counter, and informs
// the caller if exceeding the prescribed limit.
type Limiter interface {
	ShouldLimit(ctx context.Context, txn *kv.Txn, delta int) (bool, error)
}
```

The delta is computed using a static helper, `spanconfig.Delta`:

```go
// Delta considers both the committed and uncommitted state of a
// table descriptor and computes the difference in the number of
// spans we can apply a configuration over.
func Delta(
	ctx context.Context, s Splitter,
	committed, uncommitted catalog.TableDescriptor,
) (int, error)
```

The rest of the description — the `system.span_count` schema, the two integration points, the `spanconfig.tenant_limit` setting, the migration, and the notion of debt — is unchanged from the version above. When attempting a txn that goes over the configured limit, the UX is as follows:

```
> CREATE TABLE db.t42(i INT PRIMARY KEY);
pq: exceeded limit for number of table spans
```

Release note: None

Release justification: low risk, high benefit change
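A minimal sketch of how the notion of debt described above can interact with the delta-based interface (all names assumed): negative deltas always apply, so a tenant in debt can still drop objects, while positive deltas are rejected once the counter would exceed the limit.

```go
package spanlimit

// shouldLimit (hypothetical helper) decides whether a txn's span
// delta is allowed given the current counter and the configured limit.
func shouldLimit(current, delta, limit int64) bool {
	if delta <= 0 {
		return false // freeing up spans is always allowed, even in debt
	}
	return current+delta > limit
}
```

For a tenant in debt — say current = 105 with limit = 100 after the limit was lowered — shouldLimit(105, 2, 100) rejects a CREATE, while shouldLimit(105, -3, 100) lets a DROP proceed and pays down the debt.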
77337: spanconfig: limit # of tenant span configs r=irfansharif a=irfansharif

Fixes #70555. (The full PR description is the same as in the comments above.)

79462: colexecproj: break it down into two packages r=yuzefovich a=yuzefovich

**colexecproj: split up default cmp proj op file into two**

This commit splits up a single file containing two default comparison projection operators into two files. This is done in preparation for the following commit (which will move one of the operators to a different package).

Release note: None

**colexecproj: extract a new package for projection ops with const**

This commit extracts a new `colexecprojconst` package out of `colexecproj` that contains all projection operators with one constant argument. This will allow for faster build speeds since both packages contain tens of thousands of lines of code. Special care had to be taken for the default comparison operator because we need to generate two files in different packages based on a single template. I followed the precedent of `sort_partitioner.eg.go`, which had to do the same. Addresses: #79357.

Release note: None

Co-authored-by: irfan sharif <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
We have completed the work on this, especially through …
This issue tracks the implementation of multi-tenant cluster settings, as described in the RFC:
- `system.tenant_settings` table (settings: introduce system.tenant_settings table #76313)
- `SettingsWatcher` (tenantsettingswatcher: implement watcher and integrate into server #76445)
- `TenantReadOnly` semantics (tenantcostcontrol: change settings to TenantReadOnly #76680)
- `TenantReadOnly` classification of existing settings (partially done; see the registration sketch below)

Nice to haves:

- `SHOW CLUSTER SETTING` (on the tenant) should show when the setting is an override
- Make sure `kv.*` settings are not SystemOnly (*: classify existing cluster settings depending on multi-tenant usage #77472)

Epic: CRDB-6671
Jira issue: CRDB-11785
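For reference, a hedged sketch of how a setting class like `TenantReadOnly` surfaces when a setting is registered; the class names mirror this issue, but the exact registration signature and the default value shown are assumptions:

```go
package settingsketch

import "github.com/cockroachdb/cockroach/pkg/settings"

// tenantLimit sketches registering a tenant read-only setting: the
// host tenant sets it (cluster-wide or per tenant); secondary tenants
// can only read it.
var tenantLimit = settings.RegisterIntSetting(
	settings.TenantReadOnly,
	"spanconfig.tenant_limit",
	"maximum number of span configs a secondary tenant can install",
	5000, // illustrative default, not the actual value
)
```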