
settings: multi-tenant cluster settings tracking issue #73857

Closed
14 of 16 tasks
RaduBerinde opened this issue Dec 15, 2021 · 3 comments
Labels
C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.
C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception)
T-multitenant Issues owned by the multi-tenant virtual team
X-anchored-telemetry The issue number is anchored by telemetry references.

Comments

RaduBerinde (Member) commented Dec 15, 2021

This issue tracks the implementation of multi-tenant cluster settings, as described in the RFC:

Nice-to-haves:

Epic: CRDB-6671

Jira issue: CRDB-11785

@RaduBerinde RaduBerinde added the C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. label Dec 15, 2021
@blathers-crl blathers-crl bot added the T-server-and-security DB Server & Security label Dec 15, 2021
@RaduBerinde RaduBerinde self-assigned this Dec 15, 2021
vy-ton (Contributor) commented Jan 7, 2022

  • audit all non-system-only settings and find the set that needs to be moved to system (e.g. most kv.* settings are not SystemOnly)

FYI, this should be done in consultation with the various product areas/engineering teams so that they understand how to classify settings going forward.

knz (Contributor) commented Jan 12, 2022

Regarding the remaining identified steps, the skills/expertise each one requires:

  • add the new system.tenant_settings table and implement the host side of the API above using a range feed on the tenant settings table (similar to SettingsWatcher): probably SQL schema experience
  • implement the tenant side code that uses the API: SQL experience
  • implement the new statements in the RFC: SQL experience (a sketch of the intended UX follows below)
  • audit all non-system-only settings and find the set that needs to be moved to system (e.g. most kv.* settings are not SystemOnly), and add tenant setting overrides to tenant backups: Server
  • add infrastructure for consistently retrieving the current value along with whether it is an override (one idea in this comment); make SHOW CLUSTER SETTING (on the tenant) show when the setting is an override; also disallow SET CLUSTER SETTING while an override is in effect: SQL experience

cc @ajstorm
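
For illustration, a hedged sketch of the statement UX the RFC is driving at; the statement shapes and the error text here are illustrative assumptions, not the final implementation. From the system tenant:

    > ALTER TENANT 5 SET CLUSTER SETTING spanconfig.tenant_limit = 1000;
    > ALTER TENANT ALL SET CLUSTER SETTING spanconfig.tenant_limit = 500;

And from a tenant whose setting is overridden, per the last item above:

    > SET CLUSTER SETTING spanconfig.tenant_limit = 2000;
    pq: cluster setting 'spanconfig.tenant_limit' is currently overridden by the operator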

RaduBerinde added a commit to RaduBerinde/cockroach that referenced this issue Feb 2, 2022
RaduBerinde added a commit to RaduBerinde/cockroach that referenced this issue Feb 8, 2022
craig bot pushed a commit that referenced this issue Feb 9, 2022
75711: multitenant: listen for setting overrides r=RaduBerinde a=RaduBerinde

#### settings: add EncodedValue proto, update tenant settings API

This commit consolidates multiple uses of encoded setting values (raw
value and type strings) into a `settings.EncodedValue` proto.

The tenant settings roachpb API (not used yet) is updated to use this.

Release note: None

#### multitenant: listen for setting overrides

This implements the tenant side code for setting overrides.
Specifically, the tenant connector now implements the
`OverridesMonitor` interface using the `TenantSettings` API.

The server side of this API is not yet implemented, so this commit
does not include end-to-end tests. Basic functionality is verified
through a unit test that mocks the server-side API.

Informs #73857.

Release note: None

Co-authored-by: Radu Berinde <[email protected]>
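
To make the pattern above concrete, here is a minimal Go sketch of a tenant-side consumer of setting overrides, with the server side mocked out in the spirit of the unit test the commit describes. Only the names `EncodedValue` and `OverridesMonitor` come from the text above; the field and method shapes are assumptions.

    package settingsdemo

    // EncodedValue pairs a raw encoded setting value with its type
    // string, per the consolidation described in the commit message.
    // Field names are assumed for illustration.
    type EncodedValue struct {
        Value string // raw encoded value
        Type  string // type string
    }

    // OverridesMonitor is the interface the tenant connector implements;
    // this single-method shape is a simplification.
    type OverridesMonitor interface {
        // Overrides returns the current set of per-setting overrides.
        Overrides() map[string]EncodedValue
    }

    // mockMonitor stands in for the not-yet-implemented server-side API,
    // mirroring the mocked unit-test approach mentioned above.
    type mockMonitor struct{ m map[string]EncodedValue }

    func (mm mockMonitor) Overrides() map[string]EncodedValue { return mm.m }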
irfansharif added a commit to irfansharif/cockroach that referenced this issue Mar 11, 2022
Fixes cockroachdb#70555. In order to limit the number of span configs a tenant's
able to install, we introduce a tenant-side spanconfig.Limiter. It
presents the following interface:

    // Limiter is used to limit the number of span configs installed by
    // secondary tenants. It considers the committed and uncommitted
    // state of a table descriptor and computes the "span" delta, each
    // unit we can apply a configuration over. It uses these deltas to
    // maintain an aggregate counter, informing the caller if exceeding
    // the configured limit.
    type Limiter interface {
      ShouldLimit(
        ctx context.Context, txn *kv.Txn,
        committed, uncommitted catalog.TableDescriptor,
      ) (bool, error)
    }

This limiter only applies to secondary tenants. The counter is
maintained in a newly introduced (tenant-only) system table, using the
following schema:

    CREATE TABLE system.span_count (
      singleton  BOOL DEFAULT TRUE,
      span_count INT NOT NULL,
      CONSTRAINT "primary" PRIMARY KEY (singleton),
      CONSTRAINT single_row CHECK (singleton),
      FAMILY "primary" (singleton, span_count)
    );

We need just two integration points for spanconfig.Limiter:
- Right above CheckTwoVersionInvariant, where we're able to hook into
  the committed and to-be-committed descriptor state before txn commit.
- In the GC job, when gc-ing table state. We decrement a table's split
  count when GC-ing the table for good.

The per-tenant span config limit used is controlled by a new tenant
read-only cluster setting: spanconfig.tenant_limit. Multi-tenant cluster
settings (cockroachdb#73857) provides the infrastructure for the host tenant to be
able to control this setting cluster wide, or to target a specific
tenant at a time.

We also need a migration here, to start tracking span counts for
clusters with pre-existing tenants. We introduce a migration that scans
over all table descriptors and seeds system.span_count with the right
value. Given cluster version gates disseminate asynchronously, we also
need a preliminary version to start tracking incremental changes.

It's useful to introduce the notion of debt. This will be handy if/when
we lower per-tenant limits, and also in the migration above since it's
possible for pre-existing tenants to have committed state in violation
of the prescribed limit. When in debt, schema changes that add new
splits will be rejected (dropping tables/indexes/partitions/etc. will
work just fine).

When attempting a txn that goes over the configured limit, the UX is as
follows:

    > CREATE TABLE db.t2(i INT PRIMARY KEY);
    pq: exceeded limit for number of table spans

Release note: None
Release justification: low risk, high benefit change
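
A minimal runnable sketch of the limit-and-debt rule described above. The transactional read of system.span_count is elided, and `shouldLimit` here is a hypothetical helper, not CockroachDB's:

    package main

    import "fmt"

    // shouldLimit applies the rule from the commit message: only
    // span-adding changes are checked against the limit, so a tenant
    // already in debt can still drop tables/indexes/partitions.
    func shouldLimit(current, delta, limit int64) bool {
        return delta > 0 && current+delta > limit
    }

    func main() {
        // Tenant in debt (e.g. the limit was lowered to 1000 while
        // 1200 spans already exist): additions rejected, drops allowed.
        fmt.Println(shouldLimit(1200, 5, 1000))   // true: reject
        fmt.Println(shouldLimit(1200, -50, 1000)) // false: allow
    }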
@knz knz added the X-anchored-telemetry The issue number is anchored by telemetry references. label Mar 13, 2022
irfansharif added a commit to irfansharif/cockroach that referenced this issue Apr 1, 2022
irfansharif added a commit to irfansharif/cockroach that referenced this issue Apr 1, 2022
irfansharif added a commit to irfansharif/cockroach that referenced this issue Apr 4, 2022
Fixes cockroachdb#70555. In order to limit the number of span configs a tenant's
able to install, we introduce a tenant-side spanconfig.Limiter. It
presents the following interface:

    // Limiter is used to limit the number of span configs installed by
    // secondary tenants. It takes in a delta (typically the difference
    // in span configs between the committed and uncommitted state in
    // the txn), uses it to maintain an aggregate counter, and informs
    // the caller if exceeding the prescribed limit.
    type Limiter interface {
      ShouldLimit(
        ctx context.Context, txn *kv.Txn, delta int,
      ) (bool, error)
    }

The delta is computed using a static helper, spanconfig.Delta:

    // Delta considers both the committed and uncommitted state of a
    // table descriptor and computes the difference in the number of
    // spans we can apply a configuration over.
    func Delta(
      ctx context.Context, s Splitter,
      committed, uncommitted catalog.TableDescriptor,
    ) (int, error)

This limiter only applies to secondary tenants. The counter is
maintained in a newly introduced (tenant-only) system table, using the
following schema:

    CREATE TABLE system.span_count (
      singleton  BOOL DEFAULT TRUE,
      span_count INT NOT NULL,
      CONSTRAINT "primary" PRIMARY KEY (singleton),
      CONSTRAINT single_row CHECK (singleton),
      FAMILY "primary" (singleton, span_count)
    );

We need just two integration points for spanconfig.Limiter:
- Right above CheckTwoVersionInvariant, where we're able to hook into
  the committed and to-be-committed descriptor state before txn commit;
- In the GC job, when gc-ing table state. We decrement a table's split
  count when GC-ing the table for good.

The per-tenant span config limit used is controlled by a new tenant
read-only cluster setting: spanconfig.tenant_limit. Multi-tenant cluster
settings (cockroachdb#73857) provides the infrastructure for the host tenant to be
able to control this setting cluster wide, or to target a specific
tenant at a time.

We also need a migration here, to start tracking span counts for
clusters with pre-existing tenants. We introduce a migration that scans
over all table descriptors and seeds system.span_count with the right
value. Given cluster version gates disseminate asynchronously, we also
need a preliminary version to start tracking incremental changes.

It's useful to introduce the notion of debt. This will be handy if/when
we lower per-tenant limits, and also in the migration above since it's
possible for pre-existing tenants to have committed state in violation
of the prescribed limit. When in debt, schema changes that add new
splits will be rejected (dropping tables/indexes/partitions/etc. will
work just fine).

When attempting a txn that goes over the configured limit, the UX is as
follows:

    > CREATE TABLE db.t42(i INT PRIMARY KEY);
    pq: exceeded limit for number of table spans

Release note: None
Release justification: low risk, high benefit change
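
Relative to the earlier message, ShouldLimit now takes a precomputed delta. A hedged sketch of how the two quoted pieces compose at an integration point; the wrapper function and error text are invented for illustration, while the signatures are the ones quoted above:

    // checkSpanConfigLimit is a hypothetical wrapper showing the
    // composition: compute the span delta for the descriptor change,
    // then ask the limiter whether applying it would exceed the limit.
    func checkSpanConfigLimit(
        ctx context.Context, txn *kv.Txn, s spanconfig.Splitter,
        l spanconfig.Limiter, committed, uncommitted catalog.TableDescriptor,
    ) error {
        delta, err := spanconfig.Delta(ctx, s, committed, uncommitted)
        if err != nil {
            return err
        }
        exceeds, err := l.ShouldLimit(ctx, txn, delta)
        if err != nil {
            return err
        }
        if exceeds {
            return errors.New("exceeded limit for number of table spans")
        }
        return nil
    }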
irfansharif added a commit to irfansharif/cockroach that referenced this issue Apr 7, 2022
irfansharif added a commit to irfansharif/cockroach that referenced this issue Apr 7, 2022
craig bot pushed a commit that referenced this issue Apr 7, 2022
77337: spanconfig: limit # of tenant span configs r=irfansharif a=irfansharif


79462: colexecproj: break it down into two packages r=yuzefovich a=yuzefovich

**colexecproj: split up default cmp proj op file into two**

This commit splits up a single file containing two default comparison
projection operators into two files. This is done in preparation of
the following commit (which will move one of the operators to a
different package).

Release note: None

**colexecproj: extract a new package for projection ops with const**

This commit extracts a new `colexecprojconst` package out of
`colexecproj` that contains all projection operators with one
constant argument. This will allow for faster build speeds since both
packages contain tens of thousands of lines of code.

Special care had to be taken for the default comparison operator because
we need to generate two files in different packages based on a single
template. I followed the precedent of `sort_partitioner.eg.go`, which had
to do the same.

Addresses: #79357.

Release note: None

Co-authored-by: irfan sharif <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
blathers-crl bot pushed a commit that referenced this issue Apr 8, 2022
@knz knz added T-multitenant Issues owned by the multi-tenant virtual team and removed T-shared-systems Shared Systems Team labels Jun 30, 2023
knz (Contributor) commented Oct 4, 2023
