
Segment Replication - Release incorrectly retained commits on primary shards #6660

Merged
mch2 merged 3 commits into opensearch-project:main from insync on Mar 14, 2023

Conversation

mch2
Member

@mch2 mch2 commented Mar 14, 2023

Description

This change ensures that primary shards clean up any state when a replica is marked out of sync. This can happen when replicas fail due to store corruption or mismatching segments during file copy.
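
For readers skimming the diff, here is a minimal, self-contained sketch of the idea in plain Java. The class and method names are hypothetical (this is not the actual OpenSearch code): the primary tracks copy state per replica allocation ID and releases it as soon as an allocation ID drops out of the in-sync set.

import java.io.Closeable;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for the per-replica state a primary retains during segment copy
// (in OpenSearch this would include the retained index commit backing the copy).
class RetainedCopyState implements Closeable {
    private boolean released = false;

    @Override
    public void close() {
        released = true; // releasing the reference lets the primary delete the retained commit
    }

    boolean isReleased() {
        return released;
    }
}

// Hypothetical primary-side tracker: state is dropped for any replica that is no longer in-sync.
class PrimaryCopyStateTracker {
    private final Map<String, RetainedCopyState> stateByAllocationId = new HashMap<>();

    void track(String allocationId, RetainedCopyState state) {
        stateByAllocationId.put(allocationId, state);
    }

    // Called when the cluster state reports a new in-sync allocation ID set for the shard.
    void onInSyncAllocationIdsChanged(Set<String> inSyncAllocationIds) {
        stateByAllocationId.entrySet().removeIf(entry -> {
            if (inSyncAllocationIds.contains(entry.getKey()) == false) {
                entry.getValue().close(); // release the incorrectly retained state
                return true;
            }
            return false;
        });
    }
}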

Issues Resolved

closes #6578

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Commits are signed per the DCO using --signoff
  • Commit changes are listed out in CHANGELOG.md file (See: Changelog)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@mch2 mch2 changed the title Segment Replication - Release incorrectly retained index commits on p… Segment Replication - Release incorrectly retained commits on primary shards Mar 14, 2023
@mch2 mch2 marked this pull request as ready for review March 14, 2023 02:21
…rimary shards

This change ensures that primary shards clean up any state when a replica is marked
out of sync. This can happen when replicas fail due to store corruption or mismatching segments
during file copy.

Signed-off-by: Marc Handalian <[email protected]>
@github-actions
Contributor

Gradle Check (Jenkins) Run Completed with:

@@ -254,8 +260,22 @@ private void cancelHandlers(Predicate<? super SegmentReplicationSourceHandler> p
.filter(predicate)
.map(SegmentReplicationSourceHandler::getAllocationId)
.collect(Collectors.toList());
logger.trace(() -> new ParameterizedMessage("Cancelling replications for allocationIds {}", allocationIds));
Member

nit: As cancellation is an unexpected event, I think we can log this at warn or debug level.

Member Author
@mch2 mch2 Mar 14, 2023

This method is a helper that is used in multiple spots - both for cancellations on node drop and when a local primary is shutting down - which is why I thought trace was more appropriate. These are valuable logs, though, so I will change it to warn.
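
As a follow-up to this thread, here is a simplified, standalone sketch (hypothetical names, not the real OngoingSegmentReplications class) of a predicate-based cancel helper that collects the matching allocation IDs, logs them at warn level as agreed above, and then cancels and removes the handlers:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.logging.Logger;
import java.util.stream.Collectors;

// Hypothetical handler holding the replica allocation ID it is copying segments to.
class ReplicationHandler {
    private final String allocationId;

    ReplicationHandler(String allocationId) {
        this.allocationId = allocationId;
    }

    String getAllocationId() {
        return allocationId;
    }

    void cancel(String reason) {
        // release any resources (e.g. retained copy state) held for this replica
    }
}

// Hypothetical registry of in-flight replication handlers on the primary.
class ReplicationHandlerRegistry {
    private static final Logger logger = Logger.getLogger(ReplicationHandlerRegistry.class.getName());
    private final List<ReplicationHandler> handlers = new ArrayList<>();

    void add(ReplicationHandler handler) {
        handlers.add(handler);
    }

    boolean hasHandlerFor(String allocationId) {
        return handlers.stream().anyMatch(h -> h.getAllocationId().equals(allocationId));
    }

    // Cancels every handler matching the predicate; the helper is shared between
    // node-drop cancellations and local primary shutdown, so the reason string matters.
    void cancelHandlers(Predicate<? super ReplicationHandler> predicate, String reason) {
        List<ReplicationHandler> matching = handlers.stream().filter(predicate).collect(Collectors.toList());
        List<String> allocationIds = matching.stream().map(ReplicationHandler::getAllocationId).collect(Collectors.toList());
        // warn rather than trace: an unexpected cancellation is an operator-relevant event
        logger.warning("Cancelling replications for allocationIds " + allocationIds);
        for (ReplicationHandler handler : matching) {
            handler.cancel(reason);
            handlers.remove(handler);
        }
    }
}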

Comment on lines +275 to +279
cancelHandlers(
(handler) -> handler.getCopyState().getShard().shardId().equals(shardId)
&& inSyncAllocationIds.contains(handler.getAllocationId()) == false,
"Shard is no longer in-sync with the primary"
);
Member

nit: Can we add a unit test validating the cancellation?

Member Author

added
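
Illustratively, a unit test validating that behaviour could look roughly like the following (written against the simplified registry sketched above, not the actual OpenSearch test suite):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.Set;
import org.junit.Test;

public class ReplicationHandlerRegistryTests {

    @Test
    public void testOutOfSyncHandlersAreCancelled() {
        ReplicationHandlerRegistry registry = new ReplicationHandlerRegistry();
        registry.add(new ReplicationHandler("in-sync-allocation-id"));
        registry.add(new ReplicationHandler("out-of-sync-allocation-id"));

        Set<String> inSyncAllocationIds = Set.of("in-sync-allocation-id");

        // Cancel everything whose allocation ID is no longer reported as in-sync.
        registry.cancelHandlers(
            handler -> inSyncAllocationIds.contains(handler.getAllocationId()) == false,
            "Shard is no longer in-sync with the primary"
        );

        assertTrue(registry.hasHandlerFor("in-sync-allocation-id"));
        assertFalse(registry.hasHandlerFor("out-of-sync-allocation-id"));
    }
}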

refresh(INDEX_NAME);
}
// Refresh, this should trigger round of segment replication
assertBusy(() -> { assertDocCounts(docCount, replicaNode); });
Member
@dreamer-89 dreamer-89 Mar 14, 2023

I think this test will pass irrespective of whether the replica has different segment files? Can we assert on a different allocationId for the shard to ensure the shard indeed failed on the replica?

Member Author

added this.
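
For context, the suggested assertion amounts to capturing the replica's allocation ID before the induced failure and checking that it changed once the shard was failed and re-allocated. A rough sketch, assuming it runs inside an existing OpenSearch integration test (INDEX_NAME, shard 0 and the failure-injection step are placeholders; this is not the exact assertion added in this PR):

// Sketch only - assumes an OpenSearchIntegTestCase context where client(), INDEX_NAME,
// assertBusy(...) and org.opensearch.cluster.routing.ShardRouting are available.
final String allocationIdBefore = client().admin().cluster().prepareState().get()
    .getState().routingTable().index(INDEX_NAME).shard(0).replicaShards().get(0)
    .allocationId().getId();

// ... induce the replica failure (e.g. mismatching segments during file copy) ...

assertBusy(() -> {
    final ShardRouting replica = client().admin().cluster().prepareState().get()
        .getState().routingTable().index(INDEX_NAME).shard(0).replicaShards().get(0);
    assertTrue(replica.active());
    // a new allocation ID proves the old replica copy was failed and re-allocated
    assertNotEquals(allocationIdBefore, replica.allocationId().getId());
});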

@dreamer-89
Member

Gradle Check (Jenkins) Run Completed with:

Not sure why this failed. Let's see current gradle check output.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':server:forbiddenApisMain'.
> de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API calls failed, see log.

@github-actions
Contributor

Gradle Check (Jenkins) Run Completed with:

@codecov-commenter

codecov-commenter commented Mar 14, 2023

Codecov Report

Merging #6660 (f76dc57) into main (9e1f9ad) will increase coverage by 0.13%.
The diff coverage is 37.50%.


@@             Coverage Diff              @@
##               main    #6660      +/-   ##
============================================
+ Coverage     70.61%   70.74%   +0.13%     
- Complexity    59071    59177     +106     
============================================
  Files          4803     4803              
  Lines        283192   283208      +16     
  Branches      40837    40842       +5     
============================================
+ Hits         199981   200368     +387     
+ Misses        66804    66441     -363     
+ Partials      16407    16399       -8     
Impacted Files Coverage Δ
...s/replication/SegmentReplicationSourceService.java 52.11% <0.00%> (-7.57%) ⬇️
...ndices/replication/OngoingSegmentReplications.java 92.40% <85.71%> (-0.66%) ⬇️

... and 522 files with indirect coverage changes


mch2 added 2 commits March 13, 2023 20:35
Signed-off-by: Marc Handalian <[email protected]>
Signed-off-by: Marc Handalian <[email protected]>
@github-actions
Contributor

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.cluster.allocation.AwarenessAllocationIT.testThreeZoneOneReplicaWithForceZoneValueAndLoadAwareness

@mch2 mch2 merged commit 73a2279 into opensearch-project:main Mar 14, 2023
@mch2 mch2 added the backport 2.x (Backport to 2.x branch) label Mar 14, 2023
opensearch-trigger-bot bot pushed a commit that referenced this pull request Mar 14, 2023
… shards (#6660)

* Segment Replication - Release incorrectly retained index commits on primary shards

This change ensures that primary shards clean up any state when a replica is marked
out of sync. This can happen when replicas fail due to store corruption or mismatching segments
during file copy.

Signed-off-by: Marc Handalian <[email protected]>

* PR feedback.

Signed-off-by: Marc Handalian <[email protected]>

* spotless fix.

Signed-off-by: Marc Handalian <[email protected]>

---------

Signed-off-by: Marc Handalian <[email protected]>
(cherry picked from commit 73a2279)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
dreamer-89 pushed a commit that referenced this pull request Mar 14, 2023
… shards (#6660) (#6661)

* Segment Replication - Release incorrectly retained index commits on primary shards

This change ensures that primary shards clean up any state when a replica is marked
out of sync. This can happen when replicas fail due to store corruption or mismatching segments
during file copy.



* PR feedback.



* spotless fix.



---------


(cherry picked from commit 73a2279)

Signed-off-by: Marc Handalian <[email protected]>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
@mch2 mch2 deleted the insync branch March 14, 2023 06:10
mingshl pushed a commit to mingshl/OpenSearch-Mingshl that referenced this pull request Mar 24, 2023
… shards (opensearch-project#6660)

* Segment Replication - Release incorrectly retained index commits on primary shards

This change ensures that primary shards clean up any state when a replica is marked
out of sync. This can happen when replicas fail due to store corruption or mismatching segments
during file copy.

Signed-off-by: Marc Handalian <[email protected]>

* PR feedback.

Signed-off-by: Marc Handalian <[email protected]>

* spotless fix.

Signed-off-by: Marc Handalian <[email protected]>

---------

Signed-off-by: Marc Handalian <[email protected]>
Signed-off-by: Mingshi Liu <[email protected]>
Labels
backport 2.x (Backport to 2.x branch), skip-changelog
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[BUG] [Segment Replication] Shard failures on node stop/restart
3 participants