
release-22.2: spanconfig: deflake spanconfigreconcilerccl/TestDataDriven #105061

Merged: 1 commit into release-22.2 from blathers/backport-release-22.2-98082 on Jun 27, 2023

Conversation


@blathers-crl blathers-crl bot commented Jun 16, 2023

Backport 1/1 commits from #98082 on behalf of @irfansharif.

/cc @cockroachdb/release


Fixes #98038. This test set up two protection records over two schema objects at two timestamps, ts=3 and ts=4:

   /Table/10{6-7}  protection_policies=[{ts: 3} {ts: 4}]
   /Table/10{7-8}  protection_policies=[{ts: 3} {ts: 4}]
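
For reference, a pair of records like the above would have been created by protect directives along the following lines; the exact directive and target syntax (in particular the descs 106 107 line) is an assumption made for this sketch, not a quote from the test file:

   protect record-id=3 ts=3
   descs 106 107
   ----

   protect record-id=4 ts=4
   descs 106 107
   ----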

When it later released those protection records:

   release record-id=3
   release record-id=4
   ----

It asserted that the span config mutations showed that we did in fact get rid of the protected state:

   mutations
   ----
   delete /Table/10{6-7}
   upsert /Table/10{6-7}      range default
   delete /Table/10{7-8}
   upsert /Table/10{7-8}      range default

But since the release of these protections was not atomic, in #98038 we observed the following transition instead:

   delete /Table/10{6-7}
   upsert /Table/10{6-7}      protection_policies=[{ts: 4}]
   delete /Table/10{7-8}
   upsert /Table/10{7-8}      protection_policies=[{ts: 4}]
   delete /Table/10{6-7}
   upsert /Table/10{6-7}      range default
   delete /Table/10{7-8}
   upsert /Table/10{7-8}      range default

That is, we first got rid of the record with ts=3 and only then of the record with ts=4. Rather than trying to add synchronization around mutations, we simply rewrite the test to assert on the final state of the records, which shows no remaining protections.
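
As an illustration, a final-state assertion in the same datadriven format might look roughly like the following; the directive name and the offset/limit arguments here are assumptions made for the sketch, not the literal contents of the updated test file:

   state offset=57 limit=2
   ----
   ...
   /Table/10{6-7}                             range default
   /Table/10{7-8}                             range default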

Release note: None


Fixes #104858
Release justification: fixing a flaky test

@blathers-crl blathers-crl bot requested a review from a team as a code owner on June 16, 2023 16:49
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-22.2-98082 branch from e4d302b to 35d3601 on June 16, 2023 16:49
@blathers-crl blathers-crl bot added the blathers-backport (This is a backport that Blathers created automatically.) and O-robot (Originated from a bot.) labels on Jun 16, 2023
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-22.2-98082 branch from 7731020 to c838b7c on June 16, 2023 16:49
@blathers-crl blathers-crl bot (Author) commented Jun 16, 2023

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied within.
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn't know or care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@blathers-crl blathers-crl bot requested review from adityamaru and arulajmani June 16, 2023 16:49
@cockroach-teamcity (Member) commented

This change is Reviewable

@pav-kv (Collaborator) commented Jun 16, 2023

The test fixed in this PR fails on CI:

    utils.go:426:
          Error Trace:  /go/src/github.com/cockroachdb/cockroach/pkg/spanconfig/spanconfigtestutils/utils.go:426
                              /go/src/github.com/cockroachdb/cockroach/pkg/ccl/spanconfigccl/spanconfigreconcilerccl/datadriven_test.go:233
                              /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/datadriven/datadriven.go:321
                              /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/datadriven/datadriven.go:326
                              /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/datadriven/datadriven.go:195
                              /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/datadriven/datadriven.go:168
                              /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/datadriven/datadriven.go:136
                              /go/src/github.com/cockroachdb/cockroach/pkg/ccl/spanconfigccl/spanconfigreconcilerccl/datadriven_test.go:123
                              /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/datadriven/datadriven.go:413
                              /go/src/github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/datadriven/datadriven.go:426
          Error:        Should be true
          Test:         TestDataDriven/protectedts
          Messages:     offset (57) larger than number of lines (53)

Likely the data-driven parser for this test on 22.2 is somehow incompatible with this change as it was made on master.

@irfansharif Could you take a look?
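
For context, the failure message reads like a bounds check on an offset/limit argument of a datadriven directive. The Go sketch below is a hypothetical, self-contained illustration of how such a check could reject an offset baked into the test file once the 22.2 branch emits a shorter state listing than master does; it is not the actual spanconfigtestutils code, and the function name is invented for the example.

   package main

   import "fmt"

   // limitAndOffset is a hypothetical stand-in for the trimming that a
   // directive such as `state offset=N limit=M` might apply to the emitted
   // span-config listing before comparing it against the expected output.
   func limitAndOffset(lines []string, offset, limit int) ([]string, error) {
       if offset > len(lines) {
           return nil, fmt.Errorf("offset (%d) larger than number of lines (%d)", offset, len(lines))
       }
       lines = lines[offset:]
       if limit > 0 && limit < len(lines) {
           lines = lines[:limit]
       }
       return lines, nil
   }

   func main() {
       // A 57-line offset against a 53-line listing, mirroring the CI failure above.
       _, err := limitAndOffset(make([]string, 53), 57, 2)
       fmt.Println(err) // offset (57) larger than number of lines (53)
   }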

@irfansharif irfansharif force-pushed the blathers/backport-release-22.2-98082 branch from 0c3aaec to 2078e33 on June 27, 2023 21:36
@irfansharif (Contributor) commented

(Rebased to pick up #105194.)

@irfansharif irfansharif merged commit 6030b8d into release-22.2 Jun 27, 2023
@irfansharif irfansharif deleted the blathers/backport-release-22.2-98082 branch June 27, 2023 22:55