Logstash information:
1. Logstash version: 7.16.1
2. Logstash installation source: docker
3. How is Logstash being run: AWS sts / kubernetes
4. Plugins installed:

JVM (java -version):
openjdk version "11.0.13" 2021-10-19
OpenJDK Runtime Environment Temurin-11.0.13+8 (build 11.0.13+8)
OpenJDK 64-Bit Server VM Temurin-11.0.13+8 (build 11.0.13+8, mixed mode)
JVM installation: /usr/share/logstash/jdk/

OS version (uname -a if on a Unix-like system): Linux siem-logstash-1 5.11.0-1028-aws #31~20.04.1-Ubuntu SMP Fri Jan 14 14:37:50 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
When we scale down the number of pods in AWS, the Logstash process quits with an error.
Steps to reproduce:
Have two replicas of our Logstash pods running using: kubectl scale sts k8s-pod-name --replicas 2
Wait until the pods are running.
Scale down the pods using: kubectl scale sts k8s-pod-name --replicas 1
The Logstash process quits with an error, unable to finish processing our input queues into the S3 buckets. The ingest uses multiple pipelines whose inputs consume from AWS SQS and then pump the events into AWS S3.
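For context, each of these pipelines boils down to roughly this shape (queue, region, and bucket names below are placeholders, not our real values):

```
input {
  sqs {
    queue  => "example-ingest-queue"   # placeholder queue name
    region => "eu-west-1"              # placeholder region
  }
}

output {
  s3 {
    bucket => "example-ingest-bucket"  # placeholder destination bucket
    region => "eu-west-1"
    codec  => "json_lines"             # one JSON event per line
  }
}
```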
The error:
logEvent.message: The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
loggerName: org.logstash.execution.ShutdownWatcherExt
**Logs**:
Ruby stack trace around the error:
This one may need some back-and-forth in our discuss forums. GitHub issues are typically reserved for minimal reproductions of confirmed bugs.
In the warning message (not an error), we see that there are three (or more, it is truncated) threads still doing work.
When a Logstash process receives a shutdown signal, it immediately begins the process of shutting down: first, input plugins are told to stop, so that we prevent more work from getting into the queues. Next, any pipelines that are using a persistent queue and NOT configured to drain that queue are told to stop picking up work from their queues. Finally, we wait for the workers to finish up the work that they have already picked up off of their queue. This is the bit that can take a while and is really dependent on configuration and the shape of your pipeline. It is normal for the shutdown watcher to log a few messages during this timeline.
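For reference, the drain behavior mentioned above is a per-pipeline setting; a minimal pipelines.yml sketch (the pipeline id and config path here are illustrative, not taken from this issue):

```yaml
- pipeline.id: example-sqs-to-s3                            # illustrative pipeline id
  path.config: "/usr/share/logstash/pipeline/example.conf"  # illustrative config path
  queue.type: persisted   # enable the persistent queue for this pipeline
  queue.drain: true       # work the queue down to empty before allowing shutdown
```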
The first two indicate two sqs input plugins in your bullish-dev pipelines that are waiting on a network response. I looked over at the plugin repo and found an old bug indicating that it is possible for the AWS client's QueuePoller to loop internally when it doesn't receive any messages, which prevents it from seeing that it should shut down. I've opened a PR to fix that (logstash-plugins/logstash-input-sqs#65), and will get it merged and released shortly. If this is your problem, we should have a fix out soon.
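To show what that failure mode looks like, here is a rough Ruby sketch of the polling pattern involved (illustrative only, not the actual patch; queue_url, stop?, and process are stand-ins for the plugin's own config, shutdown flag, and event handling). The general idea is that the poller needs a hook that is checked between receive calls, so a stop request can interrupt an otherwise idle loop:

```ruby
require "aws-sdk-sqs"

# queue_url is supplied by the plugin configuration (stand-in here).
poller = Aws::SQS::QueuePoller.new(queue_url)

# Without a hook like this, an idle queue keeps the poller looping
# and the stop request is never observed.
poller.before_request do |_stats|
  throw :stop_polling if stop?   # stop? is the plugin's shutdown flag (illustrative)
end

poller.poll(wait_time_seconds: 20) do |message|
  process(message)  # stand-in for turning the SQS message into a Logstash event
end
```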
After upgrading to the logstash-input-sqs 3.3.2 plugin (released today), Logstash stops within seconds instead of staying stuck forever! 🥳
That being said, we still have the "The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information." error message in our Logstash logs.
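For anyone else hitting this, checking and updating the plugin in place looks like the following (paths assume the standard /usr/share/logstash layout used by the docker image):

```sh
# Show which version of the SQS input plugin is currently installed
/usr/share/logstash/bin/logstash-plugin list --verbose logstash-input-sqs

# Update just that plugin to the latest release (3.3.2 or newer)
/usr/share/logstash/bin/logstash-plugin update logstash-input-sqs
```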