
release-22.1: server: improve visibility of ranges that fail to move during decommissioning #79516

Merged

Conversation

cameronnunez
Contributor

Backport of 2 commits: 1/1 from #76516 and 1/1 from #79157.

cc @cockroachdb/release


Fixes #76249. Informs #74158.

This patch ensures that when a decommission is slow or stalls, the
descriptions of some "stuck" replicas are printed to the operator.

Release note (cli change): if decommissioning is slow or stalls, the
descriptions of the decommissioning replicas are printed to the operator.

Release justification: low-risk, high-benefit changes to existing functionality
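
For illustration, here is a minimal sketch (in Go, the project's language) of the reporting behavior described above. All names (rangeDesc, reportIfStalled, the polling loop) are hypothetical stand-ins; the actual implementation in the CockroachDB server and CLI differs in detail:

package main

import (
	"fmt"
	"time"
)

// rangeDesc is a stand-in for a range descriptor on the draining node.
type rangeDesc struct {
	rangeID int64
	desc    string
}

// reportIfStalled compares the current replica count against the previous
// poll; if nothing moved, it prints up to maxReported replica descriptions
// so the operator can see which ranges appear stuck.
func reportIfStalled(prev int, replicas []rangeDesc, maxReported int) int {
	if prev == len(replicas) && len(replicas) > 0 {
		fmt.Printf("possible stall: %d replicas still on the node\n", len(replicas))
		for i, r := range replicas {
			if i >= maxReported {
				fmt.Println("  ... (truncated)")
				break
			}
			fmt.Printf("  r%d: %s\n", r.rangeID, r.desc)
		}
	}
	return len(replicas)
}

func main() {
	// Simulated status polls: the replica count stops decreasing on the
	// third poll, which triggers a report of the remaining replicas.
	polls := [][]rangeDesc{
		{{1, "r1"}, {2, "r2"}, {3, "r3"}},
		{{2, "r2"}, {3, "r3"}},
		{{2, "r2"}, {3, "r3"}},
	}
	prev := -1
	for _, p := range polls {
		prev = reportIfStalled(prev, p, 2)
		time.Sleep(10 * time.Millisecond)
	}
}

The sketch keys off a flat replica count between consecutive polls; the real change surfaces replica descriptions when the decommission process reports slow progress.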

@cameronnunez requested a review from knz April 6, 2022 16:45
@cameronnunez requested review from a team as code owners April 6, 2022 16:45
@blathers-crl

blathers-crl bot commented Apr 6, 2022

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the following exceptional criteria are satisfied:
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters (a sketch of such a gate follows this comment).
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user that doesn’t know & care about this backport, has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?
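
As referenced in the criteria above, here is a minimal sketch of gating backported functionality behind a default-off flag so that only opted-in clusters see any behavior change. The clusterSettings type and the setting name are hypothetical stand-ins, not CockroachDB's actual settings API:

package main

import "fmt"

// clusterSettings is a hypothetical stand-in for a settings registry.
type clusterSettings struct {
	values map[string]bool
}

// enabled reports whether a named setting was explicitly turned on;
// unset settings default to false, so clusters must opt in.
func (s *clusterSettings) enabled(name string) bool {
	return s.values[name]
}

// maybeRunNewFeature guards the new code path behind a conditional
// check that is trivial to verify, per the criteria above.
func maybeRunNewFeature(s *clusterSettings) {
	if !s.enabled("server.hypothetical_feature.enabled") {
		return // opted-out clusters take the old code path unchanged
	}
	fmt.Println("running opt-in backported functionality")
}

func main() {
	s := &clusterSettings{values: map[string]bool{}}
	maybeRunNewFeature(s) // prints nothing: the default is off

	s.values["server.hypothetical_feature.enabled"] = true
	maybeRunNewFeature(s) // now the new code path runs
}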

@cockroach-teamcity
Member

This change is Reviewable

@knz
Contributor

knz commented Apr 12, 2022

@cameronnunez can you check what is failing in CI?

@cameronnunez
Contributor Author

Looks like an unrelated test failed:

------- Stdout: -------
=== RUN   TestRefresh/needs_refresh,_no_change
    cache_test.go:162:
        Error Trace:    cache_test.go:623
                        cache_test.go:162
        Error:          Not equal:
                        expected: 1
                        actual  : 2
        Test:           TestRefresh/needs_refresh,_no_change
--- FAIL: TestRefresh/needs_refresh,_no_change (0.00s)

@knz
Contributor

knz commented May 12, 2022

You can get this merged now.

@cameronnunez force-pushed the backport22.1-76516-79157 branch from 18dc729 to ebb570e on May 27, 2022 15:38

server: improve visibility of ranges that fail to move during decommissioning

This patch ensures that when a decommission is slow or stalls, the
descriptions of some "stuck" replicas are printed to the operator.

Release note (cli change): if decommissioning is slow or stalls, the
descriptions of the decommissioning replicas are printed to the operator.

Release justification: low-risk, high-benefit changes to existing functionality