Fix testForceMergeWithSoftDeletesRetentionAndRecoverySource (#48766)
This test failure manifests a limitation of the recovery source merge
policy explained in #41628: if we have already merged down to a single
segment, then subsequent force merges will be noops even though they
could still prune recovery source. We need to adjust this test until we
have a fix for the merge policy.

Relates #41628
Closes #48735
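
As context for the message above: _recovery_source is pruned only as a side
effect of a merge, and a forced merge finds no work when the index is already
a single segment. Below is a minimal sketch of that noop behavior, assuming
Lucene 8.x's MergePolicy API; the wrapper class is purely illustrative and is
not Elasticsearch's actual recovery source merge policy.

import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.FilterMergePolicy;
import org.apache.lucene.index.MergePolicy;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;

// Illustration only: makes explicit why forceMerge(1) on an index that is
// already one segment schedules no merges. Returning null from
// findForcedMerges means "nothing to do", so any per-merge rewriting
// (such as pruning _recovery_source) never runs.
final class SingleSegmentNoopDemoPolicy extends FilterMergePolicy {

    SingleSegmentNoopDemoPolicy(MergePolicy in) {
        super(in);
    }

    @Override
    public MergeSpecification findForcedMerges(SegmentInfos infos, int maxSegmentCount,
                                               Map<SegmentCommitInfo, Boolean> segmentsToMerge,
                                               MergeContext mergeContext) throws IOException {
        if (infos.size() <= maxSegmentCount) {
            return null; // already merged down: no merge, hence no pruning
        }
        return in.findForcedMerges(infos, maxSegmentCount, segmentsToMerge, mergeContext);
    }
}

With a policy behaving like this, IndexWriter#forceMerge(1) on a one-segment
index is a noop, which is exactly the situation the test fix works around by
adding an extra segment first.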
dnhatn committed Nov 3, 2019
1 parent df8346f commit 28fcf20
Showing 1 changed file with 19 additions and 11 deletions.
@@ -1694,18 +1694,26 @@ public void testForceMergeWithSoftDeletesRetentionAndRecoverySource() throws Exception
         settings.put(IndexSettings.INDEX_SOFT_DELETES_RETENTION_OPERATIONS_SETTING.getKey(), 0);
         indexSettings.updateIndexMetaData(IndexMetaData.builder(defaultSettings.getIndexMetaData()).settings(settings).build());
         engine.onSettingsChanged();
-        // If the global checkpoint equals to the local checkpoint, the next force-merge will be a noop
-        // because all deleted documents are expunged in the previous force-merge already. We need to flush
-        // a new segment to make merge happen so that we can verify that all _recovery_source are pruned.
-        if (globalCheckpoint.get() == engine.getLocalCheckpoint() && liveDocs.isEmpty() == false) {
-            String deleteId = randomFrom(liveDocs);
-            engine.delete(new Engine.Delete("test", deleteId, newUid(deleteId), primaryTerm.get()));
-            liveDocsWithSource.remove(deleteId);
-            liveDocs.remove(deleteId);
-            engine.flush();
+        // If we already merged down to 1 segment, then the next force-merge will be a noop. We need to add an extra segment to make
+        // merges happen so we can verify that _recovery_source are pruned. See: https://github.com/elastic/elasticsearch/issues/41628.
+        final int numSegments;
+        try (Engine.Searcher searcher = engine.acquireSearcher("test", Engine.SearcherScope.INTERNAL)) {
+            numSegments = searcher.getDirectoryReader().leaves().size();
+        }
+        if (numSegments == 1) {
+            boolean useRecoverySource = randomBoolean() || omitSourceAllTheTime;
+            ParsedDocument doc = testParsedDocument("dummy", null, testDocument(), B_1, null, useRecoverySource);
+            engine.index(indexForDoc(doc));
+            if (useRecoverySource == false) {
+                liveDocsWithSource.add(doc.id());
+            }
+            engine.syncTranslog();
+            globalCheckpoint.set(engine.getLocalCheckpoint());
+            engine.flush(randomBoolean(), true);
+        } else {
+            globalCheckpoint.set(engine.getLocalCheckpoint());
+            engine.syncTranslog();
         }
-        globalCheckpoint.set(engine.getLocalCheckpoint());
-        engine.syncTranslog();
         engine.forceMerge(true, 1, false, false, false);
         assertConsistentHistoryBetweenTranslogAndLuceneIndex(engine, mapperService);
         assertThat(readAllOperationsInLucene(engine, mapperService), hasSize(liveDocsWithSource.size()));
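
The fix's segment-count check is plain Lucene: each entry in
DirectoryReader#leaves() is one segment. Here is a standalone sketch of the
same check outside the Engine abstraction, assuming Lucene 8.x; the class
name and index path are placeholders, not part of the commit.

import java.io.IOException;
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.FSDirectory;

public final class SegmentCountDemo {
    public static void main(String[] args) throws IOException {
        // Open an existing index directory (path is a placeholder).
        try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/test-index"));
             DirectoryReader reader = DirectoryReader.open(dir)) {
            // One leaf per segment: the same count the test reads via
            // searcher.getDirectoryReader().leaves().size().
            int numSegments = reader.leaves().size();
            if (numSegments == 1) {
                // A force-merge to one segment would find nothing to merge here,
                // which is why the test indexes an extra document first.
                System.out.println("already a single segment: force-merge would be a noop");
            } else {
                System.out.println("segments: " + numSegments);
            }
        }
    }
}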
