
Do not log warn shard not-available exception in replication #30205

Merged
merged 1 commit into elastic:master from do-not-log-unavailable-shard-ex on Apr 27, 2018

Conversation

dnhatn
Member

@dnhatn dnhatn commented Apr 27, 2018

Since #28049, only fully initialized shards receive write requests.
This enhancement allows us to handle all exceptions. In #28571, we
started strictly handling shard-not-available exceptions and tried to
keep the way we report replication errors to users by reporting a
failure only if it is not a shard-not-available exception. However,
since then we have unintentionally logged a warning for every
exception. This change restores the previous behavior of logging a
warning only if the exception is not a shard-not-available exception.

Relates #28049
Relates #28571
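
To make the restored behavior concrete, here is a minimal, self-contained sketch of the logging decision (illustrative only; the class, method, and exception types below are hypothetical stand-ins, not the actual `TransportReplicationAction` code):

```java
import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch only: a stand-in failure handler showing the restored
// behavior of warning only on unexpected replication failures.
class ReplicaFailureLogging {
    private static final Logger logger = Logger.getLogger("replication");

    // Hypothetical placeholder types; the real check matches Elasticsearch's
    // shard-not-available exceptions (shard closed, not started, etc.).
    private static final Set<Class<? extends Throwable>> SHARD_NOT_AVAILABLE_TYPES =
            Set.of(IllegalStateException.class);

    static boolean isShardNotAvailableException(Throwable failure) {
        return SHARD_NOT_AVAILABLE_TYPES.contains(failure.getClass());
    }

    static void onReplicaOperationFailed(String shardId, Exception failure) {
        // Shard-not-available failures are still handled and reported back to
        // the caller, but they are routine (relocation, recovery) and are not
        // logged at WARN; every other failure still gets a WARN with the cause.
        if (isShardNotAvailableException(failure) == false) {
            logger.log(Level.WARNING, "[" + shardId + "] replication operation failed", failure);
        }
    }
}
```

The regression was that the `WARN` call above effectively ran unconditionally; the check around it is what this change brings back.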

@dnhatn dnhatn added >bug, :Distributed Indexing/Recovery, v7.0.0, v6.4.0, v6.3.1 labels Apr 27, 2018
@dnhatn dnhatn requested a review from ywelsch April 27, 2018 14:08
Contributor

@ywelsch ywelsch left a comment


LGTM

@dnhatn
Member Author

dnhatn commented Apr 27, 2018

@elasticmachine test this please.

@dnhatn
Member Author

dnhatn commented Apr 27, 2018

Thanks @ywelsch

@dnhatn dnhatn merged commit 9c586a2 into elastic:master Apr 27, 2018
@dnhatn dnhatn deleted the do-not-log-unavailable-shard-ex branch April 27, 2018 20:45
dnhatn added a commit that referenced this pull request Apr 27, 2018
dnhatn added a commit that referenced this pull request Apr 27, 2018
@dnhatn dnhatn added v6.3.0 and removed v6.3.1 labels May 1, 2018
@jimczi jimczi added v7.0.0-beta1 and removed v7.0.0 labels Feb 7, 2019
Labels
>bug :Distributed Indexing/Recovery v6.3.0 v6.4.0 v7.0.0-beta1