e2e_shadow_indexing_test: bump timeout for workload #11828
Conversation
/ci-repeat 4
/ci-repeat 8
/ci-repeat 7
CI failures: #11944
/cdt tests/rptest/tests/e2e_shadow_indexing_test.py::ShadowIndexingManyPartitionsTest.test_many_partitions_shutdown dt-repeat=10 dt-log-level=debug
Force-pushed from eb00e60 to 67d255c
/cdt tests/rptest/tests/e2e_shadow_indexing_test.py::ShadowIndexingManyPartitionsTest.test_many_partitions_shutdown dt-repeat=10 dt-log-level=debug
Is this close to done, @andrwng? These issues are failing quite a bit.
The fixes have so far whittled the failures down to just the shutdown hang, which seems fairly reliable in CDT. Adding more debug logging; hopefully we'll get to the root cause today.
/cdt tests/rptest/tests/e2e_shadow_indexing_test.py::ShadowIndexingManyPartitionsTest.test_many_partitions_shutdown dt-repeat=3 dt-log-level=trace
/ci-repeat
/cdt tests/rptest/tests/e2e_shadow_indexing_test.py::ShadowIndexingManyPartitionsTest.test_many_partitions_shutdown dt-repeat=3 dt-log-level=trace
/cdt tests/rptest/tests/e2e_shadow_indexing_test.py::ShadowIndexingManyPartitionsTest.test_many_partitions_shutdown dt-repeat=3 dt-log-level=trace
The latest runs show the Kafka server's …
/ci-repeat
Empirically, segments may be uploaded at roughly one per partition per second, particularly in dockerized environments. As a result, the current timeout is flaky for the number of segments we want to generate.
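A minimal sketch of that scaling idea (the function and parameter names below are hypothetical illustrations, not the exact code changed in this PR): derive the wait budget from the number of segments expected per partition and the observed per-partition upload rate, rather than hard-coding it.

```python
# Sketch only: scale the segment-upload wait with the expected workload instead
# of using a fixed timeout. Function and parameter names are hypothetical.

def segment_upload_timeout_sec(segments_per_partition: int,
                               rate_per_partition_per_sec: float = 1.0,
                               safety_factor: float = 2.0) -> float:
    """Seconds to wait for every partition to reach `segments_per_partition`
    uploaded segments, assuming uploads proceed at roughly one segment per
    partition per second (the empirically observed rate, especially in
    dockerized environments)."""
    return safety_factor * segments_per_partition / rate_per_partition_per_sec


# Example: waiting for 20 segments per partition yields a 40s budget rather
# than whatever fixed value happened to be enough on a fast machine.
assert segment_upload_timeout_sec(20) == 40.0
```

In a ducktape test, a value like this would typically feed the `timeout_sec` argument of `wait_until` wrapping the segment-count check.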
Force-pushed from e0e1065 to 810069e
/cdt tests/rptest/tests/e2e_shadow_indexing_test.py::ShadowIndexingManyPartitionsTest.test_many_partitions_shutdown dt-repeat=30
This makes the final cloud storage scrub timeout configurable for tests that expect to write a lot of data.
In CDT this test could end up generating way more data than in dockerized tests, resulting in the consume workload taking much longer than expected. This patch reduces the workload to ensure a more reasonable runtime.
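A rough sketch of how those two adjustments could fit together; the names and values below are hypothetical illustrations rather than the parameters actually introduced by this patch.

```python
# Sketch only (hypothetical names and values): a configurable final scrub
# timeout for data-heavy tests, and a smaller consume workload on CDT.

DEFAULT_SCRUB_TIMEOUT_SEC = 30


def scrub_timeout_sec(expect_large_dataset: bool,
                      large_dataset_timeout_sec: int = 300) -> int:
    """Final cloud storage scrub budget: tests that expect to write a lot of
    data get a larger timeout than the 30s default."""
    return large_dataset_timeout_sec if expect_large_dataset else DEFAULT_SCRUB_TIMEOUT_SEC


def consume_message_count(running_on_cdt: bool) -> int:
    """Consume fewer messages on CDT, where far more data can accumulate than
    in dockerized runs, so the consume phase finishes in a reasonable time."""
    return 10_000 if running_on_cdt else 100_000
```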
Force-pushed from 810069e to 60a0614
Failure is #12104
Empirically, segments may be uploaded at roughly one per partition per second, particularly in dockerized environments. As a result, the current timeout is flaky for the number of segments we want to generate.
Fixes #11268
Also fixes slow scrubbing, which occurred when there were too many segments to analyze within the 30s timeout.
Fixes #11698
Remaining flakiness appears to be fixed by #12756
Backports Required
Release Notes