DMON Filebeat #1
We already have the logstash-forwarder recipe. We don't install or use filebeat in the cookbooks used for deploying the nodes; we don't use it anywhere yet, but this is probably an oversight. For Storm it is probably not needed, but for Spark, should we enable it on the nodes?
Spark is similar to Storm in that it sends the data directly into DMON's logstash server; it doesn't use logstash-forwarder. I don't know how well this scales. One of the requirements is that the monitoring agents should be computationally lightweight, so we decided to try the Graphite data sink in Spark. It works well for up to 20 nodes; we don't know how well it scales to thousands of monitored instances. I have tried to collect these metrics via the CSV sink and standard JMX, but they have some drawbacks if the polling period is less than 10 seconds: the computational overhead is quite substantial. One alternative would be to write a collectd plugin for Spark metrics. I have opened an issue [1] in the DMON repo in order to test this.
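For context, this is roughly what enabling Spark's built-in Graphite sink looks like in `conf/metrics.properties` on each node. It is a minimal sketch; the host, port, and prefix values below are placeholders, not the actual DMON endpoint:

```properties
# conf/metrics.properties (per Spark node) -- sketch of the Graphite
# data sink mentioned above; host, port and prefix are placeholders.
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=<dmon-metrics-host>
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
*.sink.graphite.prefix=spark

# Also expose JVM metrics from the driver and executors
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```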
This sounds like it should not be much of a problem to support any of the selected solutions for collecting metrics. If you write a collectd plugin or use any other approach, we will incorporate it.
Let's open a specific issue if/when we need to support filebeat. It is not a problem on our side to do that.
The DMON platform uses logstash-forwarder, not filebeat.
The switch from logstash-forwarder to filebeat will be made at some point in the near future; however, there are still some issues left to resolve:
Could you provide a logstash-forwarder recipe?
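For illustration, here is a minimal Chef recipe sketch for installing and configuring logstash-forwarder on a monitored node. It assumes a `logstash-forwarder` package is reachable from an already-configured apt/yum repository, that the cookbook ships a `logstash-forwarder.conf.erb` template, and that `node['dmon']` attributes exist; none of these are part of the current DMON cookbooks, and the config path may differ between package builds.

```ruby
# recipes/logstash_forwarder.rb -- illustrative sketch only.
# Assumptions: a 'logstash-forwarder' package in an already configured
# repo, a logstash-forwarder.conf.erb template in this cookbook, and
# hypothetical node['dmon'] attributes.

package 'logstash-forwarder'

# Render the JSON config pointing the forwarder at DMON's logstash
# (lumberjack) input; host/port come from the hypothetical attributes.
template '/etc/logstash-forwarder.conf' do
  source 'logstash-forwarder.conf.erb'
  owner  'root'
  group  'root'
  mode   '0644'
  variables(
    servers: ["#{node['dmon']['host']}:#{node['dmon']['lumberjack_port']}"],
    paths:   node['dmon']['log_paths']
  )
  notifies :restart, 'service[logstash-forwarder]'
end

service 'logstash-forwarder' do
  action [:enable, :start]
end
```

The template would render the usual logstash-forwarder JSON config (a "network" section listing the DMON-side logstash servers and a "files" section listing the log paths to ship).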