Merge #28340 #28848
28340: storage: make lease rebalancing decisions at the store level r=a-robinson a=a-robinson

This makes lease rebalancing decisions at the store level in order to better
balance the QPS being served by each store and avoid overloaded nodes.

Fixes #21419

Release note (performance improvement): Range leases will be
automatically rebalanced throughout the cluster to even out the amount
of QPS being handled by each node.
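
The behavior is governed by the allocator cluster settings touched in this
change. A minimal sketch of enabling load-based rebalancing by hand and tuning
its sensitivity (the threshold shown is simply the shipped default, not a
recommendation):

-- Enable rebalancing of range replicas and leases based on load
-- (this change flips the default to true).
SET CLUSTER SETTING kv.allocator.stat_based_rebalancing.enabled = true;

-- How far a store's stats (e.g. writes per second) may drift from the mean
-- before it is considered overfull or underfull; 0.2 is the default.
SET CLUSTER SETTING kv.allocator.stat_rebalance_threshold = 0.2;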

28848: opt: Fix rule cycle bug r=andy-kimball a=andy-kimball

The PushFilterIntoJoinLeftAndRight and TryDecorrelateSelect rules can cycle
with one another in a rare case:

1. Right side of join has outer column due to being un-decorrelatable.
2. Filter conjunct is pushed down to both left and right side by mapping
   equivalencies in PushFilterIntoJoinLeftAndRight.
3. Left conjunct is pulled back up to join condition by TryDecorrelateSelect.

Steps 2 and 3 will cycle with one another. Cycle detection is not possible
in this case, because the left side keeps changing (a new conjunct is
pushed down to it each time).

The fix is simple: don't let PushFilterIntoJoinLeftAndRight push down filters
if either the left or right side has outer column(s).

This fixes #28818.

Release note: None
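
For context, a sketch of the query shape that can trigger the cycle,
simplified from the regression test added below in
pkg/sql/opt/norm/testdata/rules/join (the inline comments are editorial):

-- The join's ON condition is an EXISTS subquery that references a.s from the
-- enclosing query, so the join inputs carry an outer column and cannot be
-- fully decorrelated. Before this fix, conjuncts pushed down to both sides by
-- PushFilterIntoJoinLeftAndRight were pulled back up to the join condition by
-- TryDecorrelateSelect, and the two rules could cycle indefinitely.
SELECT 1
FROM a
WHERE EXISTS (
  SELECT 1
  FROM xy
  INNER JOIN uv
    ON EXISTS (SELECT 1 FROM b WHERE a.s >= 'foo' LIMIT 10)
  WHERE (SELECT s FROM a) = 'foo'
);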

Co-authored-by: Alex Robinson <[email protected]>
Co-authored-by: Andrew Kimball <[email protected]>
3 people committed Aug 20, 2018
3 parents 4f74d68 + 094bf14 + c7772ac commit eff8cce
Showing 15 changed files with 1,128 additions and 57 deletions.
2 changes: 1 addition & 1 deletion docs/generated/settings/settings.html
@@ -18,7 +18,7 @@
<tr><td><code>kv.allocator.lease_rebalancing_aggressiveness</code></td><td>float</td><td><code>1</code></td><td>set greater than 1.0 to rebalance leases toward load more aggressively, or between 0 and 1.0 to be more conservative about rebalancing leases</td></tr>
<tr><td><code>kv.allocator.load_based_lease_rebalancing.enabled</code></td><td>boolean</td><td><code>true</code></td><td>set to enable rebalancing of range leases based on load and latency</td></tr>
<tr><td><code>kv.allocator.range_rebalance_threshold</code></td><td>float</td><td><code>0.05</code></td><td>minimum fraction away from the mean a store's range count can be before it is considered overfull or underfull</td></tr>
-<tr><td><code>kv.allocator.stat_based_rebalancing.enabled</code></td><td>boolean</td><td><code>false</code></td><td>set to enable rebalancing of range replicas based on write load and disk usage</td></tr>
+<tr><td><code>kv.allocator.stat_based_rebalancing.enabled</code></td><td>boolean</td><td><code>true</code></td><td>set to enable rebalancing range replicas and leases to more evenly distribute read and write load across the stores in a cluster</td></tr>
<tr><td><code>kv.allocator.stat_rebalance_threshold</code></td><td>float</td><td><code>0.2</code></td><td>minimum fraction away from the mean a store's stats (like disk usage or writes per second) can be before it is considered overfull or underfull</td></tr>
<tr><td><code>kv.bulk_io_write.concurrent_export_requests</code></td><td>integer</td><td><code>5</code></td><td>number of export requests a store will handle concurrently before queuing</td></tr>
<tr><td><code>kv.bulk_io_write.concurrent_import_requests</code></td><td>integer</td><td><code>1</code></td><td>number of import requests a store will handle concurrently before queuing</td></tr>
2 changes: 1 addition & 1 deletion pkg/cmd/roachtest/allocator.go
@@ -33,7 +33,7 @@ func registerAllocator(r *registry) {
	c.Put(ctx, workload, "./workload")

	// Start the first `start` nodes and restore the fixture
-	args := startArgs("--args=--vmodule=allocator=5,allocator_scorer=5,replicate_queue=5")
+	args := startArgs("--args=--vmodule=store_rebalancer=5,allocator=5,allocator_scorer=5,replicate_queue=5")
	c.Start(ctx, c.Range(1, start), args)
	db := c.Conn(ctx, 1)
	defer db.Close()
180 changes: 180 additions & 0 deletions pkg/cmd/roachtest/rebalance_load.go
@@ -0,0 +1,180 @@
// Copyright 2018 The Cockroach Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
// implied. See the License for the specific language governing
// permissions and limitations under the License. See the AUTHORS file
// for names of contributors.

package main

import (
	"context"
	gosql "database/sql"
	"fmt"
	"io/ioutil"
	"os"
	"sort"
	"strconv"
	"time"

	"github.com/cockroachdb/cockroach/pkg/util/timeutil"
	"golang.org/x/sync/errgroup"
)

func registerRebalanceLoad(r *registry) {
	// This test creates a single table for kv to use and splits the table to
	// have one range for every node in the cluster. Because even brand new
	// clusters start with 20+ ranges in them, the number of new ranges in kv's
	// table is small enough that it typically won't trigger rebalancing of
	// leases in the cluster based on lease count alone. We let kv generate a lot
	// of load against the ranges such that when
	// kv.allocator.stat_based_rebalancing.enabled is set to true, we'd expect
	// load-based rebalancing to distribute the load evenly across the nodes in
	// the cluster. Without that setting, the fact that the kv table has so few
	// ranges means that they probabilistically won't have their leases evenly
	// spread across all the nodes (they'll often just end up staying on n1).
	//
	// In other words, this test should always pass with
	// kv.allocator.stat_based_rebalancing.enabled set to true, while it should
	// usually (but not always) fail with it set to false.
	rebalanceLoadRun := func(ctx context.Context, t *test, c *cluster, duration time.Duration, concurrency int) {
		roachNodes := c.Range(1, c.nodes-1)
		appNode := c.Node(c.nodes)

		c.Put(ctx, cockroach, "./cockroach", roachNodes)
		args := startArgs(
			"--args=--vmodule=store_rebalancer=5,allocator=5,allocator_scorer=5,replicate_queue=5")
		c.Start(ctx, roachNodes, args)

		c.Put(ctx, workload, "./workload", appNode)
		c.Run(ctx, appNode, `./workload init kv --drop {pgurl:1}`)

		var m *errgroup.Group // see comment in version.go
		m, ctx = errgroup.WithContext(ctx)

		m.Go(func() error {
			c.l.printf("starting load generator\n")

			quietL, err := newLogger("run kv", strconv.Itoa(0), "workload"+strconv.Itoa(0), ioutil.Discard, os.Stderr)
			if err != nil {
				return err
			}
			splits := len(roachNodes) - 1 // n-1 splits => n ranges => 1 lease per node
			return c.RunL(ctx, quietL, appNode, fmt.Sprintf(
				"./workload run kv --read-percent=95 --splits=%d --tolerate-errors --concurrency=%d "+
					"--duration=%s {pgurl:1-3}",
				splits, concurrency, duration.String()))
		})

		m.Go(func() error {
			t.Status(fmt.Sprintf("starting checks for lease balance"))

			db := c.Conn(ctx, 1)
			defer db.Close()

			if _, err := db.ExecContext(
				ctx, `SET CLUSTER SETTING kv.allocator.stat_based_rebalancing.enabled=true`,
			); err != nil {
				return err
			}

			for tBegin := timeutil.Now(); timeutil.Since(tBegin) <= duration; {
				if done, err := isLoadEvenlyDistributed(c.l, db, len(roachNodes)); err != nil {
					return err
				} else if done {
					c.l.printf("successfully achieved lease balance\n")
					return nil
				}

				select {
				case <-ctx.Done():
					return ctx.Err()
				case <-time.After(5 * time.Second):
				}
			}

			return fmt.Errorf("timed out before leases were evenly spread")
		})
		if err := m.Wait(); err != nil {
			t.Fatal(err)
		}
	}

	minutes := 2 * time.Minute
	numNodes := 4 // the last node is just used to generate load
	concurrency := 128

	r.Add(testSpec{
		Name:   `rebalance-leases-by-load`,
		Nodes:  nodes(numNodes),
		Stable: false, // TODO(a-robinson): Promote to stable
		Run: func(ctx context.Context, t *test, c *cluster) {
			if local {
				concurrency = 32
				fmt.Printf("lowering concurrency to %d in local testing\n", concurrency)
			}
			rebalanceLoadRun(ctx, t, c, minutes, concurrency)
		},
	})
}

func isLoadEvenlyDistributed(l *logger, db *gosql.DB, numNodes int) (bool, error) {
	rows, err := db.Query(
		`select lease_holder, count(*) ` +
			`from [show experimental_ranges from table kv.kv] ` +
			`group by lease_holder;`)
	if err != nil {
		return false, err
	}
	defer rows.Close()
	leaseCounts := make(map[int]int)
	var rangeCount int
	for rows.Next() {
		var storeID, leaseCount int
		if err := rows.Scan(&storeID, &leaseCount); err != nil {
			return false, err
		}
		leaseCounts[storeID] = leaseCount
		rangeCount += leaseCount
	}
	l.printf("numbers of test.kv leases on each store: %v\n", leaseCounts)

	if len(leaseCounts) < numNodes {
		l.printf("not all nodes have a lease yet: %v\n", leaseCounts)
		return false, nil
	}

	// The simple case is when ranges haven't split. We can require that every
	// store has one lease.
	if rangeCount == numNodes {
		for _, leaseCount := range leaseCounts {
			if leaseCount != 1 {
				l.printf("uneven lease distribution: %v\n", leaseCounts)
				return false, nil
			}
		}
		return true, nil
	}

	// For completeness, if leases have split, verify the leases per store don't
	// differ by any more than 1.
	leases := make([]int, 0, numNodes)
	for _, leaseCount := range leaseCounts {
		leases = append(leases, leaseCount)
	}
	sort.Ints(leases)
	if leases[0]+1 < leases[len(leases)-1] {
		l.printf("leases per store differ by more than one: %v\n", leaseCounts)
		return false, nil
	}

	return true, nil
}
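
The balance check in isLoadEvenlyDistributed boils down to a single
aggregation over the experimental ranges view. Roughly, the SQL it issues
against the table created by the kv workload is:

-- Count leases per store for the workload's table; the check passes once
-- every store holds exactly one lease (unsplit case) or the per-store counts
-- differ by at most one.
SELECT lease_holder, count(*)
FROM [SHOW EXPERIMENTAL_RANGES FROM TABLE kv.kv]
GROUP BY lease_holder;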
1 change: 1 addition & 0 deletions pkg/cmd/roachtest/registry.go
@@ -45,6 +45,7 @@ func registerTests(r *registry) {
	registerKVSplits(r)
	registerLargeRange(r)
	registerQueue(r)
+	registerRebalanceLoad(r)
	registerRestore(r)
	registerRoachmart(r)
	registerScaleData(r)
13 changes: 7 additions & 6 deletions pkg/sql/opt/norm/rules/join.opt
@@ -49,17 +49,18 @@
# Given this mapping, we can safely push the filter down to both sides and
# remove it from the ON filters list.
#
-# Note that this rule is only applied to InnerJoin and SemiJoin, not
-# InnerJoinApply or SemiJoinApply. The apply variants would cause a
-# non-detectable cycle with TryDecorrelateSelect, causing the filters to get
-# remapped to both sides and pushed down over and over again.
+# Note that this rule is only applied when the left and right inputs do not have
+# outer columns. If they do, then this rule can cause undetectable cycles with
+# TryDecorrelateSelect, since the filter is pushed down to both sides, but then
+# only pulled up from the right side by TryDecorrelateSelect. For this reason,
+# the rule also does not apply to InnerJoinApply or SemiJoinApply.
#
# NOTE: It is important that this rule is first among the join filter push-down
# rules.
[PushFilterIntoJoinLeftAndRight, Normalize]
(InnerJoin | SemiJoin
-    $left:*
-    $right:*
+    $left:* & ^(HasOuterCols $left)
+    $right:* & ^(HasOuterCols $right)
    $filters:(Filters
        $list:[
            ...
125 changes: 125 additions & 0 deletions pkg/sql/opt/norm/testdata/rules/join
@@ -19,6 +19,24 @@ TABLE b
└── INDEX primary
└── x int not null

exec-ddl
CREATE TABLE xy (x INT PRIMARY KEY, y INT)
----
TABLE xy
├── x int not null
├── y int
└── INDEX primary
└── x int not null

exec-ddl
CREATE TABLE uv (u INT PRIMARY KEY, v INT)
----
TABLE uv
├── u int not null
├── v int
└── INDEX primary
└── u int not null

# --------------------------------------------------
# EnsureJoinFiltersAnd
# --------------------------------------------------
@@ -939,6 +957,113 @@ inner-join
└── filters [type=bool, outer=(1,3), constraints=(/1: (/NULL - ]; /3: (/NULL - ]), fd=(1)==(3), (3)==(1)]
└── a = b [type=bool, outer=(1,3), constraints=(/1: (/NULL - ]; /3: (/NULL - ])]

# Regression for issue 28818. Try to trigger undetectable cycle between the
# PushFilterIntoJoinLeftAndRight and TryDecorrelateSelect rules.
opt
SELECT 1
FROM a
WHERE EXISTS (
SELECT 1
FROM xy
INNER JOIN uv
ON EXISTS (
SELECT 1
FROM b
WHERE a.s >= 'foo'
LIMIT 10
)
WHERE
(SELECT s FROM a) = 'foo'
)
----
project
├── columns: "?column?":22(int!null)
├── fd: ()-->(22)
├── distinct-on
│ ├── columns: a.k:1(int!null)
│ ├── grouping columns: a.k:1(int!null)
│ ├── key: (1)
│ └── select
│ ├── columns: a.k:1(int!null) xy.x:6(int!null) u:8(int!null) true_agg:14(bool!null)
│ ├── key: (1,6,8)
│ ├── fd: (1,6,8)-->(14)
│ ├── group-by
│ │ ├── columns: a.k:1(int!null) xy.x:6(int!null) u:8(int!null) true_agg:14(bool)
│ │ ├── grouping columns: a.k:1(int!null) xy.x:6(int!null) u:8(int!null)
│ │ ├── key: (1,6,8)
│ │ ├── fd: (1,6,8)-->(14)
│ │ ├── project
│ │ │ ├── columns: true:13(bool!null) a.k:1(int!null) xy.x:6(int!null) u:8(int!null)
│ │ │ ├── fd: ()-->(13)
│ │ │ ├── inner-join-apply
│ │ │ │ ├── columns: a.k:1(int!null) a.s:4(string) xy.x:6(int!null) u:8(int!null)
│ │ │ │ ├── fd: (1)-->(4)
│ │ │ │ ├── scan a
│ │ │ │ │ ├── columns: a.k:1(int!null) a.s:4(string)
│ │ │ │ │ ├── key: (1)
│ │ │ │ │ └── fd: (1)-->(4)
│ │ │ │ ├── inner-join
│ │ │ │ │ ├── columns: xy.x:6(int!null) u:8(int!null)
│ │ │ │ │ ├── outer: (4)
│ │ │ │ │ ├── inner-join
│ │ │ │ │ │ ├── columns: xy.x:6(int!null) u:8(int!null)
│ │ │ │ │ │ ├── key: (6,8)
│ │ │ │ │ │ ├── select
│ │ │ │ │ │ │ ├── columns: xy.x:6(int!null)
│ │ │ │ │ │ │ ├── key: (6)
│ │ │ │ │ │ │ ├── scan xy
│ │ │ │ │ │ │ │ ├── columns: xy.x:6(int!null)
│ │ │ │ │ │ │ │ └── key: (6)
│ │ │ │ │ │ │ └── filters [type=bool]
│ │ │ │ │ │ │ └── eq [type=bool]
│ │ │ │ │ │ │ ├── subquery [type=string]
│ │ │ │ │ │ │ │ └── max1-row
│ │ │ │ │ │ │ │ ├── columns: a.s:19(string)
│ │ │ │ │ │ │ │ ├── cardinality: [0 - 1]
│ │ │ │ │ │ │ │ ├── key: ()
│ │ │ │ │ │ │ │ ├── fd: ()-->(19)
│ │ │ │ │ │ │ │ └── scan a
│ │ │ │ │ │ │ │ └── columns: a.s:19(string)
│ │ │ │ │ │ │ └── const: 'foo' [type=string]
│ │ │ │ │ │ ├── select
│ │ │ │ │ │ │ ├── columns: u:8(int!null)
│ │ │ │ │ │ │ ├── key: (8)
│ │ │ │ │ │ │ ├── scan uv
│ │ │ │ │ │ │ │ ├── columns: u:8(int!null)
│ │ │ │ │ │ │ │ └── key: (8)
│ │ │ │ │ │ │ └── filters [type=bool]
│ │ │ │ │ │ │ └── eq [type=bool]
│ │ │ │ │ │ │ ├── subquery [type=string]
│ │ │ │ │ │ │ │ └── max1-row
│ │ │ │ │ │ │ │ ├── columns: a.s:19(string)
│ │ │ │ │ │ │ │ ├── cardinality: [0 - 1]
│ │ │ │ │ │ │ │ ├── key: ()
│ │ │ │ │ │ │ │ ├── fd: ()-->(19)
│ │ │ │ │ │ │ │ └── scan a
│ │ │ │ │ │ │ │ └── columns: a.s:19(string)
│ │ │ │ │ │ │ └── const: 'foo' [type=string]
│ │ │ │ │ │ └── true [type=bool]
│ │ │ │ │ ├── limit
│ │ │ │ │ │ ├── outer: (4)
│ │ │ │ │ │ ├── cardinality: [0 - 10]
│ │ │ │ │ │ ├── select
│ │ │ │ │ │ │ ├── outer: (4)
│ │ │ │ │ │ │ ├── scan b
│ │ │ │ │ │ │ └── filters [type=bool, outer=(4), constraints=(/4: [/'foo' - ]; tight)]
│ │ │ │ │ │ │ └── a.s >= 'foo' [type=bool, outer=(4), constraints=(/4: [/'foo' - ]; tight)]
│ │ │ │ │ │ └── const: 10 [type=int]
│ │ │ │ │ └── true [type=bool]
│ │ │ │ └── true [type=bool]
│ │ │ └── projections [outer=(1,6,8)]
│ │ │ └── true [type=bool]
│ │ └── aggregations [outer=(13)]
│ │ └── const-not-null-agg [type=bool, outer=(13)]
│ │ └── variable: true [type=bool, outer=(13)]
│ └── filters [type=bool, outer=(14), constraints=(/14: (/NULL - ]; tight)]
│ └── true_agg IS NOT NULL [type=bool, outer=(14), constraints=(/14: (/NULL - ]; tight)]
└── projections
└── const: 1 [type=int]

# --------------------------------------------------
# PushFilterIntoJoinLeft + PushFilterIntoJoinRight
# --------------------------------------------------
