
Fix ref count handling in Engine.failEngine #48639

Merged (2 commits) on Oct 29, 2019

Conversation

original-brownbear
Member

We can run into an already-closed store here, and incrementing its ref count then throws. This change moves to the guarded ref-count increment instead.

closes #48625
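To illustrate the difference, here is a minimal self-contained sketch of the unguarded versus guarded ref-count increment. `RefCountedStore` is an illustrative stand-in, not the actual Elasticsearch `Store` class: `incRef()` throws once the resource is closed, while `tryIncRef()` returns `false` so the caller can simply skip the work.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-in for a ref-counted store (not the real Store class).
class RefCountedStore {
    private final AtomicInteger refCount = new AtomicInteger(1);

    // Unguarded increment: throws if the store has already been closed.
    void incRef() {
        if (tryIncRef() == false) {
            throw new IllegalStateException("store is already closed");
        }
    }

    // Guarded increment: returns false instead of throwing when closed.
    boolean tryIncRef() {
        while (true) {
            int current = refCount.get();
            if (current <= 0) {
                return false; // already closed
            }
            if (refCount.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    void decRef() {
        refCount.decrementAndGet();
    }

    void close() {
        decRef(); // release the initial reference
    }
}

class GuardedRefDemo {
    public static void main(String[] args) {
        RefCountedStore store = new RefCountedStore();
        store.close(); // simulate the store being closed concurrently
        // failEngine-style code path: guard instead of throwing.
        if (store.tryIncRef()) {
            try {
                System.out.println("inspect store for corruption");
            } finally {
                store.decRef();
            }
        } else {
            System.out.println("store already closed, skipping corruption check");
        }
    }
}
```

With the unguarded `incRef()`, the same situation would have thrown from inside `failEngine` and aborted it partway through.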

@original-brownbear original-brownbear added >non-issue :Distributed Indexing/Recovery Anything around constructing a new shard, either from a local or a remote source. v8.0.0 v7.6.0 labels Oct 29, 2019
@elasticmachine
Collaborator

Pinging @elastic/es-distributed (:Distributed/Recovery)

@@ -490,7 +490,6 @@ public void testIndexAndRelocateConcurrently() throws Exception {
docs[i] = client().prepareIndex("test").setId(id).setSource("field1", English.intToEnglish(numDocs + i));
}
indexRandom(true, docs);
numDocs *= 2;
Member Author


This line is just dead code; we never touch numDocs again in this test. :)

Member

@dnhatn dnhatn left a comment


LGTM

@original-brownbear
Member Author

original-brownbear commented Oct 29, 2019

Thanks @dnhatn! Sorry for pushing b60b437; I didn't see your review. Could you take another look, though? I think this fix is better/safer because it still goes through all the close steps even if the store was closed concurrently. It also seems to give more accurate warnings and avoids needlessly touching the ref count when we're not dealing with a corruption.
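The control flow described above can be sketched as follows. This is a hedged, self-contained model, not the actual `Engine.failEngine` code: `FakeStore` and the recorded step names are hypothetical stand-ins. The point is that only the corruption check is guarded by `tryIncRef()`, while the close steps sit in a `finally` block and therefore always run, even when the store was closed concurrently.

```java
import java.util.ArrayList;
import java.util.List;

class FailEngineSketch {
    // Hypothetical stand-in for the ref-counted store.
    static class FakeStore {
        private int refCount = 1;
        synchronized boolean tryIncRef() {
            if (refCount <= 0) return false;
            refCount++;
            return true;
        }
        synchronized void decRef() { refCount--; }
        synchronized void close() { decRef(); }
    }

    final FakeStore store;
    final List<String> steps = new ArrayList<>(); // records which steps ran

    FailEngineSketch(FakeStore store) { this.store = store; }

    void failEngine(String reason) {
        try {
            // Guarded increment: if the store is already closed, skip the
            // corruption check instead of throwing and aborting failEngine.
            if (store.tryIncRef()) {
                try {
                    steps.add("corruption check");
                } finally {
                    store.decRef();
                }
            }
        } finally {
            // The close steps always run, even when the store was closed
            // concurrently before or during failEngine.
            steps.add("close: " + reason);
        }
    }

    public static void main(String[] args) {
        FakeStore store = new FakeStore();
        store.close(); // simulate a concurrent close
        FailEngineSketch engine = new FailEngineSketch(store);
        engine.failEngine("test failure");
        System.out.println(engine.steps);
    }
}
```

Under this shape, a concurrently closed store costs us only the corruption check; the engine still shuts down cleanly and notifies listeners.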

Member

@dnhatn dnhatn left a comment


LGTM. The new fix is better. Thanks @original-brownbear for an extra iteration :)

@original-brownbear original-brownbear merged commit 4b89171 into elastic:master Oct 29, 2019
@original-brownbear original-brownbear deleted the 48625 branch October 29, 2019 17:22
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Oct 29, 2019
original-brownbear added a commit that referenced this pull request Oct 30, 2019
dnhatn pushed a commit that referenced this pull request Nov 3, 2019
@dnhatn dnhatn added the v7.5.0 label Nov 3, 2019
@dnhatn
Member

dnhatn commented Nov 3, 2019

I have backported this PR to 7.5 since #48414 needs it.

@mfussenegger mfussenegger mentioned this pull request Mar 26, 2020
mfussenegger added a commit to crate/crate that referenced this pull request Apr 30, 2020
mergify bot pushed a commit to crate/crate that referenced this pull request Apr 30, 2020
@original-brownbear original-brownbear restored the 48625 branch August 6, 2020 18:27
Labels
:Distributed Indexing/Recovery Anything around constructing a new shard, either from a local or a remote source. >non-issue v7.5.0 v7.6.0 v8.0.0-alpha1
Development

Successfully merging this pull request may close these issues.

[CI] RelocationIT.testIndexAndRelocateConcurrently fails on master intake
4 participants