Logstash fails to restart after changing `queue.page_capacity` value #7581
I think we should just accept a page size change, which will then apply to newly created pages; all existing pages will eventually get purged, so I don't see the need to impose a purge process. I will go ahead and see if we can submit a simple change to that effect.
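A minimal sketch of that approach (illustrative Python, not Logstash's actual Java queue code; the function name and file-based layout are assumptions): existing page files are opened at whatever size they already have on disk, and only newly created pages use the configured capacity, so no purge is required.

```python
import os

def page_capacity_for(path, configured_capacity):
    """Pick the capacity to use for a single queue page.

    Existing page files keep their on-disk size, so a changed
    queue.page_capacity setting never conflicts with pages written
    under the old value; only pages created after the change use
    the new configured capacity.
    """
    if os.path.exists(path):
        # Honor the size this page was originally written with.
        return os.path.getsize(path)
    # Brand-new page: apply the currently configured capacity.
    return configured_capacity
```

Under this scheme a capacity change is always restart-safe, because no existing page is ever reinterpreted with the new size.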
PR in #8628
Fixed by #8628 and will be included in the 6.1.0 release.
`queue.type: persisted`
TL;DR: changing `queue.page_capacity` may leave Logstash unable to start. The outcome depends on whether any events were stored in the on-disk queue at the time of the change. If you roll a `queue.page_capacity` change across a Logstash cluster, you may end up breaking a number of nodes, and while reverting the change will fix the broken nodes, it might break new ones.

Per @colinsurprenant's request, follow-up for #7538 (rest of comment copy-pasted).
I changed the `queue.page_capacity` size on a Logstash instance processing data (trying `256mb`, `512mb` and `1024mb` values) as a test for it. The situation was then as follows:
1. When the queue contained nothing prior to the `page_capacity` size change, it was POSSIBLE to restart Logstash without any issue. By "nothing" I mean the `_node/stats` monitoring endpoint showing 0 under `pipeline.queue.events`.

   With an example change from 256 to 512mb:

   PRIOR to change:

   << actual change >>

   However, the size of the existing page file remained at its former value.

   END RESULT: Logstash resumed all of its operations properly.
2. When the queue still contained events prior to the change, Logstash failed to restart. The only ways to recover were to either:

   a) revert `queue.page_capacity` to its former value, or
   b) manually remove the contents of `/var/lib/logstash/queue` and let Logstash recreate it.

   PRIOR TO CHANGE:

   << actual change >>

   END RESULT: Logstash not starting.
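Given the two cases above, a small helper along these lines can verify that the queue is empty before touching `queue.page_capacity` (a sketch only; the field path follows the `pipeline.queue.events` shape mentioned above, and the function names are mine, not part of any Logstash tooling):

```python
def queued_events(stats):
    """Extract pipeline.queue.events from a parsed _node/stats response."""
    return stats.get("pipeline", {}).get("queue", {}).get("events", 0)

def safe_to_change_page_capacity(stats):
    """Changing queue.page_capacity is only safe when the queue is empty."""
    return queued_events(stats) == 0
```

For example, a stats document reporting `"events": 0` would pass the check, while any non-zero count means the change risks leaving Logstash unable to start.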
If it's not possible to avoid enforcing the `page_capacity` size on already-created pages, then maybe it's just worth mentioning in the docs that the queue needs to be drained first before the value can be changed?