Kibana stays read only when ES high disk watermark has been exceeded and later gone beneath the limit #13685
So how do I recover from this? .kibana stays read-only no matter what I do. I have tried to snapshot it, delete it, and restore it from the snapshot - still read-only... |
I just ran into this on a test machine. For the life of me I can't continue putting data into the cluster. I finally had to blow away all the involved indices. |
I resolved the issue by deleting the .kibana index:
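(A sketch of that kind of delete request, assuming Elasticsearch is reachable on localhost:9200; note that deleting .kibana also removes Kibana's saved objects, such as dashboards and visualizations:)
curl -XDELETE http://localhost:9200/.kibana
|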
I just got hit by this. It's not just Kibana, all indexes get locked when the disk threshold is reached and never get unlocked when space is freed. To unlock all indexes manually:
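(The command itself did not survive in this comment; it is the same settings change quoted later in the thread, shown here assuming Elasticsearch is reachable on localhost:9200:)
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'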
|
Thanks @xose, I just got hit by this again and was able to recover by using the command you suggested :) The problem occurred on all indices, not just the .kibana index. According to the ES logs, the indices were set to read-only due to low disk space on the Elasticsearch host. I run a single host with Elasticsearch, Kibana and Logstash dockerized together with some other tools. As this problem affects other indices, I think this is more of an Elasticsearch problem and that the behavior seen in Kibana is a symptom of another issue. |
This bug is stupid. Can you unbreak it for now? At least you should display a warning and list a possible solution. It is really frustrating to have to dig through the JS error log to find this thread! |
Yes, I did.
…On Sun, Nov 26, 2017 at 11:12 PM Aaron C. de Bruyn wrote:
@saberkun You can unbreak it by following the command @xose posted:
curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
|
Can you provide additional information? Did you receive an error when running the command? Did the indices unlock and now you're getting a new error message? What error messages are you seeing in your log files now? |
Thanks. It is fixed by the command. I mean yes, I used it to fix the problem.
|
+1 |
+1 on ELK 6. Cleared half the drive and the indices stayed read-only; Logstash was allowed to write again, but Kibana remained read-only. Managed to solve the issue with the workaround provided by @xose. |
+1, same error for me. |
Same issue for me. Got resolved by solution given by @xose. |
Same here. All hail @xose. |
I just upgraded a single-node cluster from 6.0.0 to 6.1.1 (both ES and Kibana). When I started the services back up, Kibana was throwing the same read-only error.
Same as last time--I had to delete the .kibana index. I didn't run out of space--there's ~92 GB out of 120 GB free on this test machine. The storage location is ZFS and a scrub didn't reveal any data corruption. The only errors in the log appear to be irrelevant.
|
+1 same error in 6.1.2 |
This is a function of Elasticsearch. Per the Elasticsearch error, the index has been marked read-only because the flood-stage disk watermark (cluster.routing.allocation.disk.watermark.flood_stage) was exceeded. To revert this for an index you can set index.blocks.read_only_allow_delete back to null. More information on this can be found here: https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html
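(A minimal sketch of that per-index settings change, assuming the affected index is .kibana and Elasticsearch is reachable on localhost:9200:)
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/.kibana/_settings -d '{"index.blocks.read_only_allow_delete": null}'
|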
FYI - for anyone still running into this, here's a quick one-liner to fix the indices. It grabs a list of all the indices in your cluster, then for each one it sends the command to make it not read-only.
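(The one-liner itself did not survive in this comment; a sketch of that approach, assuming a local cluster on localhost:9200 without authentication:)
curl -s 'http://localhost:9200/_cat/indices?h=index' | while read INDEX; do curl -XPUT -H "Content-Type: application/json" "http://localhost:9200/$INDEX/_settings" -d '{"index.blocks.read_only_allow_delete": null}'; done
|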
I too was doing this until I found @darkpixel's solution (#13685 (comment)). You can apply this setting to _all instead of going index by index. In my case, doing it for hundreds of indices takes quite a while, while setting it on _all takes only a few seconds.
|
Thanks a lot for this workaround. It solved the problem for me. |
This worked for me. Both commands were needed to get Kibana working after a fresh install:
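(The two commands themselves did not survive in this comment; a sketch of the pair typically used for this fix, assuming a local cluster on localhost:9200 - not necessarily the exact commands this commenter ran:)
# Relax the disk-based shard allocation check (re-enable it once disk space has actually been freed)
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{"transient": {"cluster.routing.allocation.disk.threshold_enabled": false}}'
# Clear the read-only block on all indices
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'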
This did not require deleting the .kibana index. Works perfectly now! Source: |
Kibana version: 6.0.0-beta1
Elasticsearch version: 6.0.0-beta1
Server OS version: Ubuntu 16.04.2 LTS
Browser version: Chrome 60.0.3112.90
Browser OS version: Windows 10
Original install method (e.g. download page, yum, from source, etc.): Official tar.gz packages
Description of the problem including expected versus actual behavior:
I'm running a single-node Elasticsearch instance, Logstash and Kibana. Everything runs on the same host in separate Docker containers.
If the high disk watermark is exceeded on the ES host, the following is logged in the Elasticsearch log:
When this has occurred, changes to the .kibana index will of course fail as the index cannot be written to. This can be observed by trying to change any setting under Management -> Advanced Settings, where a change to e.g. search:queryLanguage fails with the message: Config: Error 403 Forbidden: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
If more disk space is now made available, ES will log that the node has gone back under the high watermark:
One would now assume that it would be possible to make changes to Kibana settings, but trying to make a settings change still fails with the error message:
Config: Error 403 Forbidden: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
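(Not part of the original report, but a quick way to confirm whether the block is still applied, assuming Elasticsearch on localhost:9200: while the block is active, index.blocks.read_only_allow_delete shows up in the index settings.)
curl 'http://localhost:9200/.kibana/_settings?pretty'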
Steps to reproduce:
1. Fill up the disk on the ES host until the high disk watermark is exceeded (e.g. fallocate -l9G largefile)
2. Try to change a setting under Management -> Advanced Settings in Kibana and observe the 403 error above
3. Free the disk space again (rm largefile) and wait for ES to log that the node is back under the high watermark
4. Try to change the Kibana setting again: it still fails with the same 403 error