Merge #85722 #85786
85722: admission: add support for disk bandwidth as a bottleneck resource r=tbg,irfansharif a=sumeerbhola

We assume that:
- There is a provisioned known limit on the sum of read and write
  bandwidth. This limit is allowed to change.
- Admission control can only shape the rate of admission of writes. Writes
  also cause reads, since compactions do reads and writes.

There are multiple challenges:
- We are unable to precisely track the causes of disk read bandwidth, since
  we do not have observability into what reads missed the OS page cache.
  That is, we don't know how much of the reads were due to incoming reads
  (that we don't shape) and how much due to compaction read bandwidth.
- We don't shape incoming reads.
- There can be a large time lag between the shaping of incoming writes, and when
  it affects actual writes in the system, since compaction backlog can
  build up in various levels of the LSM store.
- Signals of overload are coarse, since we cannot view all the internal
  queues that can build up due to resource overload. For instance,
  different examples of bandwidth saturation exhibit different
  latency effects, presumably because the queue buildup is different. So it
  is non-trivial to approach full utilization without risking high latency.

Due to these challenges, and previous design attempts that were quite
complicated (and incomplete), we adopt a goal of simplicity of design, and strong
abstraction boundaries.
- The disk load is abstracted using an enum. The diskLoadWatcher can be
  evolved independently.
- The approach uses an easy-to-understand small multiplicative increase and
  large multiplicative decrease (unlike what we do for flush and compaction
  tokens, where we try to more precisely calculate the sustainable rates).
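The two bullets above can be sketched together: an abstract load-level enum plus a simple multiplicative adjustment of elastic tokens. This is a minimal Go sketch; the enum names and the 1.1x/0.5x factors are illustrative assumptions, not the actual constants used in disk_bandwidth.go.

```go
package main

import "fmt"

// diskLoadLevel abstracts disk load so the diskLoadWatcher can evolve
// independently. The level names here are illustrative.
type diskLoadLevel int

const (
	diskLoadLow diskLoadLevel = iota
	diskLoadModerate
	diskLoadHigh
	diskLoadOverload
)

// adjustElasticTokens applies a small multiplicative increase when the
// disk is underloaded and a large multiplicative decrease when it is
// overloaded. The 1.1x and 0.5x factors are assumed for illustration.
func adjustElasticTokens(cur int64, level diskLoadLevel) int64 {
	switch level {
	case diskLoadLow, diskLoadModerate:
		return cur + cur/10 // small multiplicative increase: 1.1x
	case diskLoadOverload:
		return cur / 2 // large multiplicative decrease: 0.5x
	default:
		return cur // hold steady at high (but not overloaded) load
	}
}

func main() {
	fmt.Println(adjustElasticTokens(1000, diskLoadLow))      // 1100
	fmt.Println(adjustElasticTokens(1000, diskLoadOverload)) // 500
}
```

The asymmetry (slow additive-style growth, fast halving on overload) is what lets the controller back off quickly when the coarse overload signal fires.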

Since we are using a simple approach that is somewhat coarse in its behavior,
we start by limiting its application to two kinds of writes:
- Incoming writes that are deemed "elastic": This can be done by
  introducing a work-class (in addition to admissionpb.WorkPriority), or by
  implying a work-class from the priority (e.g. priorities < NormalPri are
  deemed elastic). This prototype does the latter.
- Optional compactions: We assume that the LSM store is configured with a
  ceiling on number of regular concurrent compactions, and if it needs more
  it can request resources for additional (optional) compactions. These
  latter compactions can be limited by this approach. See
  cockroachdb/pebble#1329 for motivation. This control on compactions
  is not currently implemented and is future work (though the prototype
  in #82813 had code for
  it).
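The "work-class implied from the priority" rule from the first bullet can be sketched as follows. This is a simplified stand-in: the real code uses admissionpb.WorkPriority, and the `normalPri` constant and helper name here are hypothetical.

```go
package main

import "fmt"

// workClass distinguishes regular from elastic work. In the prototype the
// class is implied from the priority rather than being a separate field.
type workClass int

const (
	regularWorkClass workClass = iota
	elasticWorkClass
)

const normalPri = 0 // stand-in for admissionpb.NormalPri

// workClassFromPri mirrors the rule above: priorities below NormalPri
// are deemed elastic, everything else is regular.
func workClassFromPri(pri int) workClass {
	if pri < normalPri {
		return elasticWorkClass
	}
	return regularWorkClass
}

func main() {
	fmt.Println(workClassFromPri(-10) == elasticWorkClass) // true
	fmt.Println(workClassFromPri(10) == regularWorkClass)  // true
}
```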

The reader should start with disk_bandwidth.go, consisting of
- diskLoadWatcher: which computes load levels.
- diskBandwidthLimiter: It uses the load level computed by diskLoadWatcher
  to limit write tokens for elastic writes and, in the future, will also
  limit compactions.

There is significant refactoring and changes in granter.go and
work_queue.go. This is driven by the fact that:
- Previously the tokens were for L0 and now we need to support tokens for
  bytes into L0 and tokens for bytes into the LSM (the former being a subset
  of the latter).
- Elastic work is in a different WorkQueue than regular work, but they
  are competing for the same tokens. A different WorkQueue is needed to
  prevent a situation where elastic work for one tenant is queued ahead
  of regular work from another tenant, and stops the latter from making
  progress due to lack of elastic tokens.

The latter is handled by allowing kvSlotGranter to multiplex across
multiple requesters, via multiple child granters. A number of interfaces
are adjusted to make this viable. In general, the GrantCoordinator
is now slightly dumber and some of that logic is moved into the granters.
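The head-of-line-blocking concern that motivates the separate elastic WorkQueue can be illustrated with a toy model: two queues draining from one shared token pool. The types and sizes here are assumptions for illustration only.

```go
package main

import "fmt"

// sharedTokens is a toy stand-in for the disk bandwidth tokens that both
// the regular and elastic WorkQueues compete for.
type sharedTokens struct{ avail int64 }

func (t *sharedTokens) tryTake(n int64) bool {
	if t.avail >= n {
		t.avail -= n
		return true
	}
	return false
}

// admit drains a queue of work sizes, stopping at the first item that
// cannot get tokens. Because regular and elastic work live in separate
// queues, a stalled elastic item never blocks regular work behind it.
func admit(queue []int64, tokens *sharedTokens) (admitted int) {
	for _, sz := range queue {
		if !tokens.tryTake(sz) {
			break
		}
		admitted++
	}
	return admitted
}

func main() {
	tokens := &sharedTokens{avail: 100}
	elastic := []int64{80, 50} // the second elastic item stalls
	regular := []int64{10}
	fmt.Println(admit(elastic, tokens)) // 1
	fmt.Println(admit(regular, tokens)) // 1: regular still makes progress
}
```

Had both kinds of work shared one queue, the stalled 50-byte elastic item would have sat ahead of the 10-byte regular item and blocked it.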

For the former (handling two kinds of tokens), I considered adding multiple
resource dimensions to the granter-requester interaction but found it
too complicated. Instead we rely on the observation that we request
tokens based on the total incoming bytes of the request (not just L0),
and when the request is completed, tell the granter how many bytes
went into L0. The latter allows us to return tokens to L0. So at the
time the request is completed, we can account separately for the L0
tokens and these new tokens for all incoming bytes (which we are calling
disk bandwidth tokens, since they are constrained based on disk bandwidth).

This is a cleaned up version of the prototype in
#82813 which contains the
experimental results. The plumbing from the KV layer to populate the
disk reads, writes and provisioned bandwidth is absent in this PR,
and will be added in a subsequent PR.

Disk bandwidth bottlenecks are considered only if both the following
are true:
- DiskStats.ProvisionedBandwidth is non-zero.
- The cluster setting admission.disk_bandwidth_tokens.elastic.enabled
  is true (defaults to true).
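The two gating conditions reduce to a simple conjunction; the function name and parameters below are illustrative, not the real API.

```go
package main

import "fmt"

// diskBandwidthConsidered mirrors the two conditions above:
// DiskStats.ProvisionedBandwidth must be non-zero, and the
// admission.disk_bandwidth_tokens.elastic.enabled setting must be true.
func diskBandwidthConsidered(provisionedBandwidth int64, elasticEnabled bool) bool {
	return provisionedBandwidth != 0 && elasticEnabled
}

func main() {
	fmt.Println(diskBandwidthConsidered(0, true))      // false: no provisioned bandwidth
	fmt.Println(diskBandwidthConsidered(1<<30, true))  // true
	fmt.Println(diskBandwidthConsidered(1<<30, false)) // false: setting disabled
}
```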

Informs #82898

Release note: None (the cluster setting mentioned earlier has no effect yet,
since the integration with CockroachDB will be in a future PR).

85786: sql: support UDFs with named args, strictness, and volatility r=mgartner a=mgartner

#### sql: UDF with empty result should evaluate to NULL

If the last statement in a UDF returns no rows, the UDF will evaluate to
NULL. Prior to this commit the evaluation of the UDF would panic.

Release note: None

#### sql: support UDFs with named arguments

UDFs with named arguments can now be evaluated.

During query planning, statements in the function body are built with a
scope that includes the named arguments for the function as columns.
This allows references to arguments to be resolved as variables.

During evaluation, the input expressions are first evaluated into
datums. When a plan is built for each statement in the UDF, the argument
columns in the expression are replaced with the input datums before the
expression is optimized.
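The substitution step described above, replacing argument columns with already-evaluated input datums before the statement is optimized, can be modeled as follows. This is a toy sketch: the real code operates on optimizer expressions and typed datums, not strings.

```go
package main

import "fmt"

// substituteArgs replaces named-argument references in a UDF body
// expression with the evaluated input datums. Tokens that are not
// argument names pass through unchanged.
func substituteArgs(expr []string, args map[string]string) []string {
	out := make([]string, len(expr))
	for i, tok := range expr {
		if datum, ok := args[tok]; ok {
			out[i] = datum
		} else {
			out[i] = tok
		}
	}
	return out
}

func main() {
	body := []string{"a", "+", "b"} // body statement referencing args a, b
	inputs := map[string]string{"a": "1", "b": "2"}
	fmt.Println(substituteArgs(body, inputs)) // [1 + 2]
}
```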

Note that anonymous arguments and integer references to arguments (e.g.,
`$1`) are not yet supported.

Also, the formatting of `UDFExpr`s has been improved to show argument
columns and input expressions.

Release note: None

#### sql: do not evaluate strict UDFs if any input values are NULL

A UDF can have one of two behaviors when it is invoked with NULL inputs:

  1. If the UDF is `CALLED ON NULL INPUT` (the default) then the
     function is evaluated regardless of whether or not any of the input
     values are NULL.
  2. If the UDF `RETURNS NULL ON NULL INPUT` or is `STRICT` then the
     function is not evaluated if any of the input values are NULL.
     Instead, the function directly results in NULL.

This commit implements these two behaviors.

In the future, we can add a normalization rule that folds a strict UDF
if any of its inputs are constant NULL values.
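The two behaviors can be sketched as a single guard before evaluating the function body. This is a minimal Go model, with `nil` standing in for a SQL NULL datum; the helper names are hypothetical.

```go
package main

import "fmt"

// evalUDF models the two NULL-input behaviors: if strict
// (RETURNS NULL ON NULL INPUT / STRICT), any NULL input short-circuits
// to NULL without evaluating the body; otherwise (CALLED ON NULL INPUT)
// the body is always evaluated.
func evalUDF(strict bool, inputs []any, eval func([]any) any) any {
	if strict {
		for _, in := range inputs {
			if in == nil {
				return nil // result is directly NULL; body not evaluated
			}
		}
	}
	return eval(inputs)
}

func main() {
	double := func(in []any) any { return in[0].(int) * 2 }
	fmt.Println(evalUDF(true, []any{nil}, double)) // <nil>
	fmt.Println(evalUDF(true, []any{21}, double))  // 42
}
```

Note that the strict path never calls `eval`, which is exactly what makes the future constant-NULL folding rule safe.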

Release note: None

#### sql: make mutations visible to volatile UDFs

The volatility of a UDF affects the visibility of mutations made by the
statement calling the function. A volatile function will see these
mutations. Also, statements within a volatile function's body will see
changes made by previous statements in the function body (note that this is
left untested in this commit because we do not currently support
mutations within UDF bodies). In contrast, a stable, immutable, or
leakproof function will see a snapshot of the data as of the start of
the statement calling the function.
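The visibility rule amounts to choosing which snapshot a UDF statement reads at. This toy sketch models that choice with sequence numbers; the mechanism and names are illustrative, not CockroachDB's actual read-sequence machinery.

```go
package main

import "fmt"

// readSeq models snapshot selection: a volatile UDF reads at the current
// sequence (seeing the calling statement's mutations), while a stable,
// immutable, or leakproof UDF reads at the sequence captured when the
// calling statement started.
func readSeq(volatile bool, stmtStartSeq, currentSeq int) int {
	if volatile {
		return currentSeq
	}
	return stmtStartSeq
}

func main() {
	fmt.Println(readSeq(true, 10, 12))  // 12: sees in-statement mutations
	fmt.Println(readSeq(false, 10, 12)) // 10: snapshot at statement start
}
```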

Release note: None


Co-authored-by: sumeerbhola <[email protected]>
Co-authored-by: Marcus Gartner <[email protected]>
3 people committed Aug 12, 2022
3 parents 468ac01 + 88ee320 + c4bf42a commit e17eb36
Showing 33 changed files with 3,214 additions and 977 deletions.
1 change: 1 addition & 0 deletions docs/generated/settings/settings.html
```diff
@@ -1,6 +1,7 @@
 <table>
 <thead><tr><th>Setting</th><th>Type</th><th>Default</th><th>Description</th></tr></thead>
 <tbody>
+<tr><td><code>admission.disk_bandwidth_tokens.elastic.enabled</code></td><td>boolean</td><td><code>true</code></td><td>when true, and provisioned bandwidth for the disk corresponding to a store is configured, tokens for elastic work will be limited if disk bandwidth becomes a bottleneck</td></tr>
 <tr><td><code>admission.epoch_lifo.enabled</code></td><td>boolean</td><td><code>false</code></td><td>when true, epoch-LIFO behavior is enabled when there is significant delay in admission</td></tr>
 <tr><td><code>admission.epoch_lifo.epoch_closing_delta_duration</code></td><td>duration</td><td><code>5ms</code></td><td>the delta duration before closing an epoch, for epoch-LIFO admission control ordering</td></tr>
 <tr><td><code>admission.epoch_lifo.epoch_duration</code></td><td>duration</td><td><code>100ms</code></td><td>the duration of an epoch, for epoch-LIFO admission control ordering</td></tr>
```
2 changes: 1 addition & 1 deletion pkg/sql/faketreeeval/evalctx.go
```diff
@@ -391,7 +391,7 @@ func (ep *DummyEvalPlanner) EvalSubquery(expr *tree.Subquery) (tree.Datum, error

 // EvalRoutineExpr is part of the eval.Planner interface.
 func (ep *DummyEvalPlanner) EvalRoutineExpr(
-	ctx context.Context, expr *tree.RoutineExpr,
+	ctx context.Context, expr *tree.RoutineExpr, input tree.Datums,
 ) (tree.Datum, error) {
 	return nil, errors.WithStack(errEvalPlanner)
 }
```