Merge #47232 #47268
47232: opt: corrects function doc for canMaybeConstrainIndex r=mgartner a=mgartner

#### opt: corrects function doc for canMaybeConstrainIndex

This commit corrects the function documentation for
canMaybeConstrainIndex.

Prior to this change, it did not mention a third check, for tight filter
constraints, that is performed to determine the possibility that an
index could be constrained by a filter.

Also, the documentation now correctly maps to the logic of the function.
Previously it falsely claimed that if any of the checks were false then
the function would return false. Now it correctly states that if any of
the checks are true, then the function returns true.

Release note: None



47268: roachtest: improve and de-flake version-upgrade r=spaskob a=tbg

roachtest: automate and de-flake version-upgrade

The recent changes to this test seem to have uncovered a latent [bug],
as a result of which the test was skipped. This commit lets the test
start from v19.2 fixtures (instead of v19.1 before), which were
experimentally verified not to exhibit the bug, so the test is unskipped.

While I was there, I automated the base version discovery of this test.
On a branch at version X, the test will now use the fixtures for the
last major release preceding X. We'll occasionally have to bake new
fixtures and update the mapping in PredecessorVersion(), but this is
better than having the test rot on older branches whenever we update
it for a new dev cycle.

The root cause of the bug will be fixed separately; this is tracked in
issue #44732.

[bug]: #47235 (comment)

Release note: None
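
As a rough sketch of the mapping described above (the map entries and the
string-typed parameter are illustrative assumptions; the real helper takes
the roachtest build version, and its table is maintained by hand):

package main

import "fmt"

// PredecessorVersion returns the fixture version of the last major
// release preceding the given release series. Hypothetical sketch.
func PredecessorVersion(buildSeries string) (string, error) {
	// Maps a "major.minor" release series to the newest fixture of the
	// preceding release. Extend whenever new fixtures are baked.
	predecessors := map[string]string{
		"20.1": "19.2.1",
		"19.2": "19.1.5",
	}
	v, ok := predecessors[buildSeries]
	if !ok {
		return "", fmt.Errorf("no known predecessor for release series %q", buildSeries)
	}
	return v, nil
}

func main() {
	predV, err := PredecessorVersion("20.1")
	if err != nil {
		panic(err)
	}
	fmt.Println(predV) // prints 19.2.1
}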


Co-authored-by: Marcus Gartner <[email protected]>
Co-authored-by: Tobias Schottdorf <[email protected]>
3 people committed Apr 9, 2020
3 parents 4bdcec3 + c6423e9 + 6e0b72f commit 3967538
Showing 3 changed files with 108 additions and 96 deletions.
9 changes: 7 additions & 2 deletions pkg/cmd/roachtest/acceptance.go
@@ -46,8 +46,13 @@ func registerAcceptance(r *testRegistry) {
{name: "status-server", fn: runStatusServer},
{
name: "version-upgrade",
fn: runVersionUpgrade,
skip: "skipped due to flakiness",
fn: func(ctx context.Context, t *test, c *cluster) {
predV, err := PredecessorVersion(r.buildVersion)
if err != nil {
t.Fatal(err)
}
runVersionUpgrade(ctx, t, c, predV)
},
// This test doesn't like running on old versions because it upgrades to
// the latest released version and then it tries to "head", where head is
// the cockroach binary built from the branch on which the test is
180 changes: 92 additions & 88 deletions pkg/cmd/roachtest/versionupgrade.go
@@ -76,84 +76,66 @@ DROP TABLE test.t;
`),
}

const (
headVersion = "HEAD"
)

func runVersionUpgrade(ctx context.Context, t *test, c *cluster) {
func runVersionUpgrade(ctx context.Context, t *test, c *cluster, predecessorVersion string) {
// This is ugly, but we can't pass `--encrypt=false` to old versions of
// Cockroach.
//
// TODO(tbg): revisit as old versions are aged out of this test.
c.encryptDefault = false

const baseVersion = "19.1.5"
// Set the bool within to true to create a new fixture for this test. This
// is necessary after every release. For example, when `master` becomes
// the 20.2 release, this test will fail because it is missing a fixture for
// 20.1; run the test (on 20.1) with the bool flipped to create the fixture.
// Check it in (instructions are on the 'checkpointer' struct) and off we
// go.
cp := checkpointer{on: false}

u := newVersionUpgradeTest(c, versionUpgradeTestFeatures,
// Load baseVersion fixture. That fixture's cluster version may be
// at the predecessor version, so add a waitForUpgradeStep to make
// sure we're actually running at baseVersion before moving on.
// Start the cluster from a fixture. That fixture's cluster version may
// be at the predecessor version (though in practice it's fully up to
// date, if it was created via the checkpointer above), so add a
// waitForUpgradeStep to make sure we're upgraded all the way before
// moving on.
//
// See the comment on the checkpointer struct for details on fixtures.
uploadAndStartFromCheckpointFixture(baseVersion),
uploadAndStartFromCheckpointFixture(predecessorVersion),
waitForUpgradeStep(),

// NB: before the next step, cluster and binary version equals baseVersion,
// NB: at this point, cluster and binary version equal predecessorVersion,
// and auto-upgrades are on.

binaryUpgradeStep("19.2.1"),
waitForUpgradeStep(),

// Each new release has to be added here. When adding a new release,
// you'll probably need to use a release candidate binary. You may also
// want to bake in the last step above, i.e. run this test with
// createCheckpoints=true, update the fixtures (see comments there), and
// then change baseVersion to one release later.

// HEAD gives us the main binary for this roachtest run. We upgrade into
// this version more capriciously to ensure better coverage by first
// rolling the cluster into the new version with auto-upgrade disabled,
// then rolling back, and then rolling forward and finalizing
// (automatically).
// We use an empty string for the version below, which means to use the
// main ./cockroach binary (i.e. the one being tested in this run).
// We upgrade into this version more capriciously to ensure better
// coverage by first rolling the cluster into the new version with
// auto-upgrade disabled, then rolling back, and then rolling forward
// and finalizing on the auto-upgrade path.
preventAutoUpgradeStep(),
// Roll nodes forward.
binaryUpgradeStep("HEAD"),
binaryUpgradeStep(""),
// Roll back again. Note that bad things would happen if the cluster had
// ignored our request to not auto-upgrade. The `autoupgrade` roachtest
// exercises this in more detail, so here we just rely on things working
// as they ought to.
binaryUpgradeStep("19.2.1"),
// Roll nodes forward, this time allowing them to upgrade.
binaryUpgradeStep("HEAD"),
binaryUpgradeStep(predecessorVersion),
// Roll nodes forward, this time allowing them to upgrade, and waiting
// for it to happen.
binaryUpgradeStep(""),
allowAutoUpgradeStep(),
waitForUpgradeStep(),

// This is usually a noop, but if cp.on was set above, this is where we
// create fixtures. This is last so that we're creating the fixtures for
// the binary we're testing (i.e. if this run is on release-20.1, we'll
// create a 20.1 fixture).
cp.createCheckpointsAndFail,
)

u.run(ctx, t)
}

// createCheckpoints is used to "age out" old versions of CockroachDB. We want
// to test data that was created at v1.0, but we don't actually want to run a
// long chain of binaries starting all the way at v1.0. Instead, we periodically
// bake a set of store directories that originally started out on v1.0 and
// maintain it as a fixture for this test.
//
// The checkpoints will be created in the log directories. The test will fail
// on purpose when it's done. After, manually invoke the following to move the
// archives to the right place and commit the result:
//
// for i in 1 2 3 4; do
// mkdir -p pkg/cmd/roachtest/fixtures/${i} && \
// mv artifacts/acceptance/version-upgrade/run_1/${i}.logs/checkpoint-*.tgz \
// pkg/cmd/roachtest/fixtures/${i}/
// done
const createCheckpoints = false

func (u *versionUpgradeTest) run(ctx context.Context, t *test) {
if createCheckpoints {
// We rely on cockroach-HEAD to create checkpoint, so upload it early.
_ = u.uploadVersion(ctx, t, headVersion)
}

defer func() {
for _, db := range u.conns {
_ = db.Close()
@@ -174,10 +174,6 @@ func (u *versionUpgradeTest) run(ctx context.Context, t *test) {
}
}
}

if createCheckpoints {
t.Fatal("failing on purpose to have store snapshots collected in artifacts")
}
}

type versionUpgradeTest struct {
@@ -217,7 +195,7 @@ func (u *versionUpgradeTest) conn(ctx context.Context, t *test, i int) *gosql.DB

func (u *versionUpgradeTest) uploadVersion(ctx context.Context, t *test, newVersion string) option {
var binary string
if newVersion == headVersion {
if newVersion == "" {
binary = cockroach
} else {
var err error
@@ -232,7 +210,10 @@ func (u *versionUpgradeTest) uploadVersion(ctx context.Context, t *test, newVers
}
}

target := "./cockroach-" + newVersion
target := "./cockroach"
if newVersion != "" {
target += "-" + newVersion
}
u.c.Put(ctx, binary, target, u.c.All())
return startArgs("--binary=" + target)
}
@@ -333,31 +314,6 @@ func binaryUpgradeStep(newVersion string) versionStep {
// test? We could run logictests. We could add custom logic here. Maybe
// this should all be pushed to nightly migration tests instead.
}

if createCheckpoints && newVersion != headVersion {
// If we're taking checkpoints, momentarily stop the cluster (we
// need to do that to get the checkpoints to reflect a
// consistent cluster state). The binary at this point will be
// the new one, but the cluster version was not explicitly
// bumped, though auto-update may have taken place already.
// For example, if newVersion is 2.1, the cluster version in
// the store directories may be 2.0 on some stores and 2.1 on
// the others (though if any are on 2.1, then that's what's
// stored in system.settings).
// This means that when we restart from that version, we're
// going to want to use the binary mentioned in the checkpoint,
// or at least one compatible with the *predecessor* of the
// checkpoint version. For example, for checkpoint-2.1, the
// cluster version might be 2.0, so we can only use the 2.0 or
// 2.1 binary, but not the 19.1 binary (as 19.1 and 2.0 are not
// compatible).
name := checkpointName(newVersion)
c.Stop(ctx, nodes)
c.Run(ctx, c.All(), "./cockroach-HEAD", "debug", "rocksdb", "--db={store-dir}",
"checkpoint", "--checkpoint_dir={store-dir}/"+name)
c.Run(ctx, c.All(), "tar", "-C", "{store-dir}/"+name, "-czf", "{log-dir}/"+name+".tgz", ".")
c.Start(ctx, t, nodes, args, startArgsDontEncrypt, roachprodArgOption{"--sequential=false"})
}
}
}

@@ -398,15 +354,11 @@ func waitForUpgradeStep() versionStep {

for i := 1; i <= c.spec.NodeCount; i++ {
err := retry.ForDuration(30*time.Second, func() error {
db := u.conn(ctx, t, i)

var currentVersion string
if err := db.QueryRow("SHOW CLUSTER SETTING version").Scan(&currentVersion); err != nil {
t.Fatalf("%d: %s", i, err)
}
currentVersion := u.clusterVersion(ctx, t, i).String()
if currentVersion != newVersion {
return fmt.Errorf("%d: expected version %s, got %s", i, newVersion, currentVersion)
}
t.l.Printf("%s: acked by n%d", currentVersion, i)
return nil
})
if err != nil {
@@ -438,3 +390,55 @@ func stmtFeatureTest(
},
}
}

// checkpointer creates fixtures to "age out" old versions of CockroachDB. We
// want to test data that was created at v1.0, but we don't actually want to run
// a long chain of binaries starting all the way at v1.0. Instead, we
// periodically bake a set of store directories that originally started out on
// v1.0 and maintain it as a fixture for this test.
//
// The checkpoints will be created in the log directories downloaded as part of
// the artifacts. The test will fail on purpose when it's done. After, manually
// invoke the following to move the archives to the right place and commit the
// result:
//
// for i in 1 2 3 4; do
// mkdir -p pkg/cmd/roachtest/fixtures/${i} && \
// mv artifacts/acceptance/version-upgrade/run_1/${i}.logs/checkpoint-*.tgz \
// pkg/cmd/roachtest/fixtures/${i}/
// done
type checkpointer struct {
on bool
}

func (cp *checkpointer) createCheckpointsAndFail(
ctx context.Context, t *test, u *versionUpgradeTest,
) {
if !cp.on {
return
}
c := u.c
nodes := u.c.All()
// If we're taking checkpoints, momentarily stop the cluster (we
// need to do that to get the checkpoints to reflect a
// consistent cluster state). The binary at this point will be
// the new one, but the cluster version was not explicitly
// bumped, though auto-update may have taken place already.
// For example, if newVersion is 2.1, the cluster version in
// the store directories may be 2.0 on some stores and 2.1 on
// the others (though if any are on 2.1, then that's what's
// stored in system.settings).
// This means that when we restart from that version, we're
// going to want to use the binary mentioned in the checkpoint,
// or at least one compatible with the *predecessor* of the
// checkpoint version. For example, for checkpoint-2.1, the
// cluster version might be 2.0, so we can only use the 2.0 or
// 2.1 binary, but not the 19.1 binary (as 19.1 and 2.0 are not
// compatible).
name := checkpointName(u.binaryVersion(ctx, t, 1).String())
c.Stop(ctx, nodes)
c.Run(ctx, c.All(), "./cockroach", "debug", "rocksdb", "--db={store-dir}",
"checkpoint", "--checkpoint_dir={store-dir}/"+name)
c.Run(ctx, c.All(), "tar", "-C", "{store-dir}/"+name, "-czf", "{log-dir}/"+name+".tgz", ".")
t.Fatalf("successfully created checkpoints; failing test on purpose")
}
15 changes: 9 additions & 6 deletions pkg/sql/opt/xform/custom_funcs.go
@@ -969,13 +969,16 @@ func (c *CustomFuncs) allInvIndexConstraints(
return constraints, true
}

// canMaybeConstrainIndex performs two checks that can quickly rule out the
// possibility that the given index can be constrained by the specified filter:
// canMaybeConstrainIndex returns true if we should try to constrain a given
// index by the given filter. It returns false if it is impossible for the
filter to constrain the scan.
//
// 1. If the filter does not reference the first index column, then no
// constraint can be generated.
// 2. If none of the filter's constraints start with the first index column,
// then no constraint can be generated.
If any of the following three statements is true, then it is
// possible that the index can be constrained:
//
// 1. The filter references the first index column.
// 2. The constraints are not tight (see props.Scalar.TightConstraints).
// 3. Any of the filter's constraints start with the first index column.
//
func (c *CustomFuncs) canMaybeConstrainIndex(
filters memo.FiltersExpr, tabID opt.TableID, indexOrd int,
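
The body of canMaybeConstrainIndex is collapsed above. Following the
rewritten doc comment, the three checks might read roughly as below (a
sketch; accessor names are assumed rather than taken from the diff):

func (c *CustomFuncs) canMaybeConstrainIndex(
	filters memo.FiltersExpr, tabID opt.TableID, indexOrd int,
) bool {
	md := c.e.mem.Metadata()
	index := md.Table(tabID).Index(indexOrd)
	firstIndexCol := tabID.ColumnID(index.Column(0).Ordinal)

	for i := range filters {
		props := filters[i].ScalarProps()

		// Check 1: the filter references the first index column.
		if props.OuterCols.Contains(firstIndexCol) {
			return true
		}

		// Check 2: the constraints are not tight, so sub-expressions may
		// still yield an index constraint.
		if !props.TightConstraints {
			return true
		}

		// Check 3: one of the filter's constraints starts with the first
		// index column.
		if props.Constraints != nil {
			for j := 0; j < props.Constraints.Length(); j++ {
				cols := props.Constraints.Constraint(j).Columns
				if cols.Count() > 0 && cols.Get(0).ID() == firstIndexCol {
					return true
				}
			}
		}
	}
	return false
}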
