archival/tests: yield more often to the scheduler
Adding a segment takes ~1ms, so adding multiple segments in a single batch can stall the reactor for a while. This, together with a recent seastar change[^1], caused some timeouts to fire very frequently[^2].

Fix the problematic test by splitting the batch passed to add_segments so that we execute finer-grained tasks and yield to the scheduler more often.

[^1]: scylladb/seastar#2238
[^2]: redpanda-data#13275
nvartolomei committed May 13, 2024
1 parent 1acaf9d commit 545a04f
Showing 1 changed file with 23 additions and 12 deletions.
src/v/archival/tests/ntp_archiver_test.cc
@@ -1701,21 +1701,32 @@ static void test_manifest_spillover_impl(
     }).get();
 
     // Generate new manifest based on data layout on disk
-    std::vector<cloud_storage::segment_meta> all_segments;
+    vlog(test_log.debug, "stm add segments");
+    auto add_segments =
+      [&part](std::vector<cloud_storage::segment_meta> segments) {
+          part->archival_meta_stm()
+            ->add_segments(
+              std::move(segments),
+              std::nullopt,
+              model::producer_id{},
+              ss::lowres_clock::now() + 1s,
+              never_abort,
+              cluster::segment_validated::yes)
+            .get();
+      };
+
+    // 10 at a time to avoid reactor stalls.
+    const int max_batch_size = 10;
+    std::vector<cloud_storage::segment_meta> batch;
     for (const auto& s : manifest) {
-        all_segments.push_back(s);
+        if (batch.size() >= max_batch_size) {
+            add_segments(batch);
+            batch.clear();
+        }
+        batch.push_back(s);
     }
+    add_segments(batch);
 
-    vlog(test_log.debug, "stm add segments");
-    part->archival_meta_stm()
-      ->add_segments(
-        all_segments,
-        std::nullopt,
-        model::producer_id{},
-        ss::lowres_clock::now() + 1s,
-        never_abort,
-        cluster::segment_validated::yes)
-      .get();
     vlog(test_log.debug, "stm add segments completed");
 
     vlog(
