"failed to turn off translog retention" after upgrade #651
Comments
Hi @gferrette, I have tried to reproduce the issue with a CentOS 7 server running the RPM, upgrading from 1.0.2 to 1.7.0. However, I am not able to reproduce this issue with simple data on my end. From the looks of it, this seems like an issue related to upstream. We would appreciate it if you could share more information regarding your setup and logs. Thanks.
Hello @peterzhuamazon! Thanks for replying. This issue seems to be the same as the one in this thread: https://github.com/opendistro-for-elasticsearch/security/issues/354, but in my case it's happening on several indexes, not only on the audit index. Relevant excerpts from my elasticsearch.yml:

#action.destructive_requires_name: true
script.painless.regex.enabled: true
# snapshot repository
######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
opendistro_security.ssl.transport.pemcert_filepath: dummy.pem
opendistro_security.enable_snapshot_restore_privilege: true

More log info:

[2020-08-17T15:57:03,094][INFO ][o.e.g.GatewayService ] [machine] recovered [27] indices into cluster_state
Hi @gferrette, after discussing with the team, we think this issue is more related to the security repo, as there are already similar issues to this one. We will transfer this issue to the security repo. Thanks.
Hi @gferrette,
Hello @dinusX! Please find below the stack trace with DEBUG log level:

[2020-08-18T10:52:54,720][DEBUG][o.e.c.s.MasterService ] [machine] publishing cluster state version [482]

Thanks in advance!
From the above logs it seems that you have an index ".tasks" that is failing during ES process boot-up.
Hello @dinusX! Thanks for replying. This error occurs on several indexes, not only on .tasks. The .tasks index was in green state, but I removed it anyway, since ES recreates it when it needs it. The error continues on other indexes, as below:

[2020-08-18T17:53:34,022][DEBUG][o.e.c.s.ClusterApplierService] [machine] processing [Publication{term=24, version=635}]: execute

All the indexes on which this error occurs are in green state.
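As a quick way to double-check the "all green" claim, the output of the cat indices API can be filtered for anything that is not green. This is a minimal sketch: the sample data below is fabricated for illustration, and in practice the same filter would be fed by `curl -s 'localhost:9200/_cat/indices?h=health,index'`.

```shell
# Fabricated sample of what `GET _cat/indices?h=health,index` might return.
cat > /tmp/indices.txt <<'EOF'
green  .tasks
green  .kibana_1
yellow my-logs-2020.08
green  security-auditlog-2020.08.17
EOF

# Print the name of any index whose health column is not "green".
awk '$1 != "green" {print $2}' /tmp/indices.txt
# -> my-logs-2020.08
```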
If I'm not mistaken, the following commit should fix your warning messages: elastic/elasticsearch#57063. This was fixed in ES 7.7.1+. From the description it doesn't seem to be a bug, just an unnecessary warning message.
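For reference, on a pre-7.7.1 cluster one could also clear any explicit translog retention settings per index through the index settings API, so the engine relies on soft deletes instead. This is a sketch, not a recommendation from the thread: the index name and cluster address below are assumptions, and the snippet deliberately only prints the request it would send, so it can be inspected before being run for real.

```shell
INDEX=".tasks"                 # hypothetical index name for illustration
ES="http://localhost:9200"     # assumed cluster address

# Setting the retention keys to null removes any explicit translog retention.
BODY='{"index.translog.retention.size": null, "index.translog.retention.age": null}'

# Dry run: echo the curl invocation instead of executing it.
echo curl -XPUT "$ES/$INDEX/_settings" -H 'Content-Type: application/json' -d "$BODY"
```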
Hello @dinusX! It seems it's only a warning message, according to this thread. Thanks for your help and for clarifying our questions!
Hello,
After upgrading Open Distro from version 1.0.2 to version 1.7.0, the message below appears in the logs on node startup:
[2020-08-14T10:54:21,893][WARN ][o.e.i.s.IndexShard ] [machine] [.tasks][0] failed to turn off translog retention
org.apache.lucene.store.AlreadyClosedException: engine is closed
at org.elasticsearch.index.shard.IndexShard.getEngine(IndexShard.java:2528) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.index.shard.IndexShard.trimTranslog(IndexShard.java:1106) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.index.shard.IndexShard$3.doRun(IndexShard.java:1944) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.6.1.jar:7.6.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:830) [?:?]
This message appears for several indices. The indices/shards are not corrupted and are in green state, but the messages show up in the logs on every startup.
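To see exactly which indices emit this warning at startup, the log can be filtered for the message and the index name extracted from the bracketed field before the shard number. A minimal sketch over a fabricated two-line log excerpt; in practice the input would be the real Elasticsearch log file.

```shell
# Fabricated log excerpt showing the warning for two indices.
cat > /tmp/es.log <<'EOF'
[2020-08-14T10:54:21,893][WARN ][o.e.i.s.IndexShard] [machine] [.tasks][0] failed to turn off translog retention
[2020-08-14T10:54:22,101][WARN ][o.e.i.s.IndexShard] [machine] [.kibana_1][0] failed to turn off translog retention
EOF

# Grep for the warning, then pull out the index name ([index][shard] precedes
# the message), deduplicating with sort -u. Prints one index name per line.
grep 'failed to turn off translog retention' /tmp/es.log |
  sed -E 's/.*\] \[([^]]+)\]\[[0-9]+\] failed to turn off.*/\1/' |
  sort -u
```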
Is there any way to solve this issue?
Thanks in advance.
Gabriel.