backupccl: TestScheduleChainingLifecycle is flaky #88575

Closed

nicktrav opened this issue Sep 23, 2022 · 7 comments · Fixed by #95606
Comments

@nicktrav (Collaborator) commented Sep 23, 2022

Describe the problem

Observed in this CI run.

[TestScheduleChainingLifecycle/cluster] Failed
[23:58:19][TestScheduleChainingLifecycle/cluster] === RUN   TestScheduleChainingLifecycle/cluster
    schedule_pts_chaining_test.go:86: 
        	Error Trace:	/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/6617/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/backupccl/schedule_pts_chaining_test.go:86
        	            				/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/6617/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/backupccl/schedule_pts_chaining_test.go:178
        	Error:      	Not equal: 
        	            	expected: hlc.Timestamp{WallTime:1663890902552441000, Logical:0, Synthetic:false}
        	            	actual  : hlc.Timestamp{WallTime:1663890902571559000, Logical:0, Synthetic:false}
        	            	
        	            	Diff:
        	            	--- Expected
        	            	+++ Actual
        	            	@@ -1,3 +1,3 @@
        	            	 (hlc.Timestamp) {
        	            	- WallTime: (int64) 1663890902552441000,
        	            	+ WallTime: (int64) 1663890902571559000,
        	            	  Logical: (int32) 0,
        	Test:       	TestScheduleChainingLifecycle/cluster
    --- FAIL: TestScheduleChainingLifecycle/cluster (1.00s)

To Reproduce

Attempted to repro locally via --stress, but couldn't observe a failure in a ~10 min run.

$ ./dev test ./pkg/ccl/backupccl --filter TestScheduleChainingLifecycle/cluster --stress

Jira issue: CRDB-19847

@nicktrav added the C-bug, A-disaster-recovery, and T-disaster-recovery labels Sep 23, 2022
@blathers-crl bot commented Sep 23, 2022

cc @cockroachdb/disaster-recovery

@andreimatei (Contributor)

this happened to me too

@msbutler (Collaborator)

@adityamaru I see that you were assigned to the 22.2 issue. Have you looked into this yet? If not, I can take a look.

@adityamaru (Contributor)

Haven't looked at it yet, go for it! Thank you and let me know if you want me to take over at any point.

@msbutler self-assigned this Nov 18, 2022
@msbutler (Collaborator) commented Nov 18, 2022

I haven't been able to repro the failure under stress, but I'll describe what Nick's failure indicates:

In the test, we run an incremental backup schedule. Once the first incremental backup finishes, we expect the incremental schedule's PTS record to update from the full backup's end time to the incremental backup's end time.
https://github.com/cockroachdb/cockroach/blob/master/pkg/ccl/backupccl/schedule_pts_chaining_test.go#L178

In Nick's failure, the expected PTS time is ~19 milliseconds less than the actual PTS on the schedule. The mystery is why the actual PTS has advanced further than expected.
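
For reference, a minimal sketch of the invariant being asserted (hypothetical names, not the actual test code at the line linked above):

```go
// Sketch only: after the first incremental backup completes, the incremental
// schedule's protected timestamp (PTS) record should equal that backup's end
// time. In the failure above, the actual PTS was ~19ms ahead of the end time
// the test expected.
package backupccl_test

import (
	"testing"

	"github.com/cockroachdb/cockroach/pkg/util/hlc"
	"github.com/stretchr/testify/require"
)

func requirePTSChainedToIncremental(t *testing.T, incEndTime, ptsRecordTS hlc.Timestamp) {
	require.Equal(t, incEndTime, ptsRecordTS)
}
```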

@stevendanna (Collaborator)

The most recent failure of this was seen on TeamCity in Unit Test 7573004.

One interesting thing: from the test output, it appears we expect a full backup to run, followed by an incremental, followed by a full.

But in the failing case here, we see two full backups scheduled before the incremental:

I221117 23:55:12.688043 720485 ccl/backupccl/schedule_exec.go:94 ⋮ [schedule=814805524865744897] 132  Starting scheduled backup 814805524865744897: ‹BACKUP INTO 'nodelocal://1/cluster' AS OF SYSTEM TIME '2022-11-17 23:55:12.137172+00:00' WITH revision_history = true, detached›
....
I221117 23:55:13.071353 720485 ccl/backupccl/schedule_exec.go:94 ⋮ [schedule=814805524865744897] 184  Starting scheduled backup 814805524865744897: ‹BACKUP INTO 'nodelocal://1/cluster' AS OF SYSTEM TIME '2022-11-18 00:00:00+00:00' WITH revision_history = true, detached›
....
I221117 23:55:13.090470 720485 ccl/backupccl/schedule_exec.go:94 ⋮ [schedule=814805524856340481] 186  Starting scheduled backup 814805524856340481: ‹BACKUP INTO LATEST IN 'nodelocal://1/cluster' AS OF SYSTEM TIME '2022-11-18 00:00:00+00:00' WITH revision_history = true, detached›

@msbutler (Collaborator) commented Dec 1, 2022

I think I figured it out. Notice that Nick's failure and the latest failure Steven looked at both occurred around midnight (spooky). The backup schedule in this test uses the following crontab recurrence: '*/5 * * * *'. In English, this means "run a full backup now, then run a full backup every day at midnight, and an incremental every 5 minutes." As Steven points out, this test relies on an incremental backup running after the first full backup. But what happens if the first full backup gets scheduled to run within 5 minutes of midnight? A second full backup may get scheduled before the expected incremental backup, breaking the invariant the test expects.

As you can see from the log messages above, these backups are actually getting scheduled before their AOSTs, which I believe is due to the timestamp tomfoolery Aditya added to the test to make it run more quickly.

I think the easiest fix for this is to 1) lower the incremental schedule frequency to 1 minute, and 2) check at the top of the test if we're close to midnight, and if so, just skip the test (a rough sketch of such a guard is below).

This story also explains why we couldn't repro the failure: we weren't running the test close to midnight.
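
A rough sketch of what such a "skip near midnight" guard could look like (illustrative only; the actual fix landed in #95606):

```go
// Sketch only, not the code that landed: skip the test when the wall clock is
// within `window` of midnight UTC, since the daily full backup could then fire
// before the first incremental and break the chaining invariant.
package backupccl_test

import (
	"testing"
	"time"
)

func skipIfCloseToMidnight(t *testing.T, window time.Duration) {
	now := time.Now().UTC()
	sinceMidnight := now.Sub(now.Truncate(24 * time.Hour)) // time since midnight UTC
	untilMidnight := 24*time.Hour - sinceMidnight
	if sinceMidnight < window || untilMidnight < window {
		t.Skipf("within %s of midnight; a second full backup could run before the incremental", window)
	}
}
```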

msbutler added a commit to msbutler/cockroach that referenced this issue Jan 20, 2023
This patch skips the test if the machine clock is close to midnight, and
increases the frequency of incremental backups to run every 2 minutes.

Previously, the backup schedule in this test used the following crontab
recurrence: '*/5 * * * *'. In English, this means "run a full backup now,
then run a full backup every day at midnight, and an incremental every 5
minutes." This test also relies on an incremental backup running after the
first full backup. But what happens if the first full backup gets scheduled to
run within 5 minutes of midnight? A second full backup may get scheduled before
the expected incremental backup, breaking the invariant the test expects.

Fixes cockroachdb#88575 cockroachdb#95186

Release note: None
craig bot pushed a commit that referenced this issue Jan 20, 2023
…95564 #95583 #95606

90222: server: add api for decommission pre-flight checks r=AlexTalks a=AlexTalks

While we have an API for checking the status of an in-progress decommission, we did not previously have an API to execute sanity checks prior to requesting a node to move into the `DECOMMISSIONING` state. This adds an API to do just that, intended to be called by the CLI prior to issuing a subsequent `Decommission` RPC request.

Fixes #91568.

Release note: None

91458: roachtest/mixed-versions-compat: use corpus for master r=fqazi a=fqazi

Informs: #91350

The CI build scripts take advantage of the branch name to upload the corpus to GCS. Unfortunately, we have no way of knowing if the current build is master inside the roachtest. To address this, this patch supports fetching the master corpus as a fallback.

Release note: None

92826: multiregionccl: add a cold start latency test  r=ajwerner a=ajwerner

This commit adds a test which creates an MR serverless cluster and then
boots the sql pods in each region while disallowing connectivity to other
regions. It also simulates latency to make sure the routing logic works
and to provide a somewhat realistic picture of what to expect.

Epic: CRDB-18596

Release note: None

93758: server: evaluate decommission pre-checks r=kvoli a=AlexTalks

This adds support for the evaluation of the decommission readiness of a
node (or set of nodes), by simulating their liveness to have the
DECOMMISSIONING status and utilizing the allocator to ensure that we are
able to perform any actions needed to repair the range. This supports a
"strict" mode, in which case we expect all ranges to only need
replacement or removal due to the decommissioning status, or a more
permissive "non-strict" mode, which allows for other actions needed, as
long as they do not encounter errors in finding a suitable allocation
target. The non-strict mode allows us to permit situations where a range
may have more than one action needed to repair it, such as a range that
needs to reach its replication factor before the decommissioning replica
can be replaced, or a range that needs to finalize an atomic replication
change.

Depends on #94024.

Part of #91568

95007: admission: CPU slot adjustment and utilization metrics r=irfansharif a=sumeerbhola

Our existing metrics are gauges (total and used slots) which don't give us insight into what is happening at smaller time scales. This creates uncertainty when we observe admission queueing but the gauge samples show total slots consistently greater than used slots. Additionally, if total slots is steady during queuing, it doesn't tell us whether that was because of roughly matching increments or decrements, or no increments/decrements.

The following metrics are added:
- admission.granter.slots_exhausted_duration.kv: cumulative duration when the slots were exhausted. This can give insight into how much exhaustion was occurring. It is insufficient to tell us whether 0.5sec/sec of exhaustion is due to a long 500ms of exhaustion and then non-exhaustion or alternating 1ms of exhaustion and non-exhaustion. But this is an improvement over what we have.
- admission.granter.slot_adjuster_{increments,decrements}.kv: Counts the increments and decrements of the total slots.
- admission.granter.cpu_load_{short,long}_period_duration.kv: cumulative duration of short and long ticks, as indicated by the period in the CPULoad callback. We don't expect long period ticks when admission control is active (and we explicitly disable enforcement during long period ticks), but it helps us eliminate some hypotheses during incidents (e.g. long period ticks alternating with short period ticks causing a slowdown in how fast we increment slots). Additionally, the sum of the rate of these two, if significantly < 1, would indicate that CPULoad frequency is lower than expected, say due to CPU overload.

Fixes #92673

Epic: none

Release note: None

95145: sql/stats: include partial stats in results of statsCache.GetTableStats r=rytaft a=michae2

We were not including partial stats in the list of table statistics returned by `statsCache.GetTableStats`. This was fine for the optimizer, which currently cannot use partial stats directly, but it was a problem for backup.

We'd like to use partial stats directly in the optimizer eventually, so this commit goes ahead and adds them to the results of `GetTableStats`. The optimizer then must filter them out. To streamline this we add some helper functions.

Finally, in an act of overzealous refactoring, this commit also changes `MergedStatistics` and `ForecastTableStatistics` to accept partial statistics and full statistics mixed together in the same input list. This simplifies the code that calls these functions.

Fixes: #95056
Part of: #93983

Epic: CRDB-19449

Release note: None

95387: kvserver: separate loadstats cpu nanos to raft/req r=andrewbaptist a=kvoli

Previously, loadstats tracked replica raft/request cpu nanos per second separately but returned both summed together in `load.ReplicaLoadStats`. This patch separates `RaftCPUNanosPerSecond` and
`RequestCPUNanosPerSecond` in the returned `load.ReplicaLoadStats` so that they may be used independently.

Informs #95380

Release note: None

95557: sql: Fix testing_optimizer_disable_rule_probability usage with vtables r=cucaroach a=cucaroach

If a vtable scan query tries to use the dummy "0" column, the exec builder
errors out. This typically won't happen thanks to the prune-columns
normalization rules, and those rules are marked as "essential", but the logic
allowing those rules to be applied was flawed.

Epic: CRDB-20535
Informs: #94890
Release note: None


95559: rpc: fix comment r=andreimatei a=andreimatei

This copy-pasta comment was mentioning a KV node, which was not right.

Release note: None
Epic: None

95564: cpustopwatch: s/grunning.Difference/grunning.Elapsed r=irfansharif a=irfansharif

`grunning.Elapsed()` is the API to use when measuring the running time spent doing some piece of work, with measurements from the start and end. This only exists due to `grunning.Time()`'s non-monotonicity, a bug in our runtime patch: #95529. The bug results in slight {over,under}-estimation of the running time (the latter breaking monotonicity), but is livable with our current uses of this library, including the one here in cpustopwatch. `grunning.Elapsed()` papers over this bug by 0-ing out when `grunning.Time()`stamps regress. This is unlike `grunning.Difference()` which would return the absolute value of the regression -- not what we want here.

Release note: None
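
For illustration only (this is not the actual `grunning` implementation), the distinction described above amounts to clamping regressions to zero rather than taking their absolute value:

```go
// Sketch only: "elapsed" treats a non-monotonic reading as zero work done,
// while "difference" returns the absolute value and would overstate it.
package main

import (
	"fmt"
	"time"
)

func elapsed(start, end time.Duration) time.Duration {
	if end < start {
		return 0 // reading regressed; report no running time
	}
	return end - start
}

func difference(a, b time.Duration) time.Duration {
	if a > b {
		return a - b
	}
	return b - a
}

func main() {
	start, end := 5*time.Millisecond, 4*time.Millisecond // regressed reading
	fmt.Println(elapsed(start, end))    // 0s
	fmt.Println(difference(start, end)) // 1ms -- not what cpustopwatch wants
}
```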

95583: sql: fix cluster setting propagation flake take 2 r=cucaroach a=cucaroach

Previously we tried to fix this with one retry but that was
insufficient.  Extend it to all queries in this section of the test.

Release note: None
Epic: CRDB-20535


95606: backupccl: deflake TestScheduleChainingLifecycle r=adityamaru a=msbutler

This patch skips the test if the machine clock is close to midnight, and increases the frequency of incremental backups to run every 2 minutes.

Previously, the backup schedule in this test used the following crontab recurrence: '*/5 * * * *'. In English, this means "run a full backup now, then run a full backup every day at midnight, and an incremental every 5 minutes." This test also relies on an incremental backup running after the first full backup. But what happens if the first full backup gets scheduled to run within 5 minutes of midnight? A second full backup may get scheduled before the expected incremental backup, breaking the invariant the test expects.

Fixes #88575 #95186

Release note: None

Co-authored-by: Alex Sarkesian <[email protected]>
Co-authored-by: Faizan Qazi <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
Co-authored-by: sumeerbhola <[email protected]>
Co-authored-by: Michael Erickson <[email protected]>
Co-authored-by: Austen McClernon <[email protected]>
Co-authored-by: Tommy Reilly <[email protected]>
Co-authored-by: Andrei Matei <[email protected]>
Co-authored-by: irfan sharif <[email protected]>
Co-authored-by: Michael Butler <[email protected]>
@craig bot closed this as completed in #95606 Jan 20, 2023
msbutler added a commit that referenced this issue Jan 23, 2023
This patch skips the test if the machine clock is close to midnight, and
increases the frequency of incremental backups to run every 2 minutes.

Previously, the backup schedule in this test used the following crontab
recurrence: '*/5 * * * *'. In English, this means "run a full backup now,
then run a full backup every day at midnight, and an incremental every 5
minutes." This test also relies on an incremental backup running after the
first full backup. But what happens if the first full backup gets scheduled to
run within 5 minutes of midnight? A second full backup may get scheduled before
the expected incremental backup, breaking the invariant the test expects.

Fixes #88575 #95186

Release note: None