[Integrations]: Kafka support for Observability #774
Here's a brief overview of the Kafka exploration done so far:

Prerequisite
- Kafka Logs
- Kafka Statistics

JConsole:
- Advantage:
- Disadvantage:

We explored the various options available in JConsole but discarded it, since it does not provide a programmatic interface.

JmxTrans: https://github.com/jmxtrans/jmxtrans/wiki
- Advantage:
- Disadvantage:

Burrow: Yet to be explored.
Prometheus: Yet to be explored.
Statistics via JMXTrans:
The following section describes how we accessed metrics using JMXTrans, as well as which metrics are available to query.

Metrics Availability:
As mentioned in the previous post, the metrics to be monitored for Kafka/ZooKeeper are well documented. JConsole lists all the relevant MBeans to be queried in its GUI. We configured JConsole and got the names of the MBeans to query from there. The below snapshot shows the same. Note that, for the analysis we have done, we took a sample of the available MBeans and tried querying them.

JMXTrans Configuration:
Once we had the MBeans to be queried, we created config files in JMXTrans that query the relevant JVM instance for metrics. These are simple JSON/YAML files containing elements/nodes that query MBeans inside a JVM instance.
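As an illustration of the kind of config file described above, here is a minimal JMXTrans query sketch; the JMX host/port, the sampled MBean, and the output file path are assumptions, not the exact values used in our setup:

```json
{
  "servers": [
    {
      "host": "localhost",
      "port": 9999,
      "queries": [
        {
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
          "attr": ["Count", "OneMinuteRate"],
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.KeyOutWriter",
              "outputFile": "/tmp/kafka-metrics.txt",
              "maxLogFileSize": "10MB",
              "maxLogBackupFiles": 200
            }
          ]
        }
      ]
    }
  ]
}
```

Each entry under "queries" names one MBean and the attributes to sample; the KeyOutWriter appends one line per sampled value to the given file, which is what makes the FluentD integration below possible.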
The config section also allows you to redirect the metrics output to a number of writers. Some of these write the metrics to files, while others can forward them to different applications, for example Graphite; some can also stream the metrics over UDP. We gathered the metrics and stored them in a simple file using the KeyOutWriter, as shown in the config file above.

Integration with FluentD & OpenSearch:
The file created via JMXTrans was ingested into OpenSearch using FluentD. We have stopped the analysis on Kafka/ZooKeeper at this juncture and are awaiting further instructions.
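A sketch of the FluentD wiring described above, tailing the JMXTrans output file into OpenSearch; the file paths, tag, host/port, and index name are illustrative assumptions:

```
<source>
  @type tail
  path /tmp/kafka-metrics.txt
  pos_file /var/log/td-agent/kafka-metrics.pos
  tag kafka.metrics
  <parse>
    @type none
  </parse>
</source>

<match kafka.metrics>
  @type opensearch
  host localhost
  port 9200
  index_name kafka-metrics
</match>
```

The `tail` input follows the file as JMXTrans appends to it, and the `opensearch` output plugin (fluent-plugin-opensearch) indexes each record into the named index.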
Prometheus:
We are evaluating Prometheus for collecting statistics, as it supports integration with a large number of applications.

Output from Configured Endpoint:
FluentD is not able to read the file containing the default output from the endpoints.

Prom2json:
Since the file created from the default endpoint's output was not readable by FluentD, prom2json reads the default Prometheus endpoint and translates the output to JSON.

HTTP API:
Prometheus provides REST APIs to query data and get results in JSON.

Remote Write API:
This could be used to store data from Prometheus in a remote time-series database such as InfluxDB.

Kindly let us know if:
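Regarding the prom2json step above: its output is a JSON array of metric families, each with labeled samples. A minimal sketch of flattening that structure into one JSON record per sample, suitable for a log shipper such as FluentD; the sample data (metric name, labels, values) is hypothetical:

```python
import json

# Hypothetical sample in the shape prom2json emits: an array of
# metric families, each with a name, type, and labeled samples.
sample = '''
[
  {
    "name": "kafka_server_brokertopicmetrics_messagesin_total",
    "help": "Messages received per topic",
    "type": "COUNTER",
    "metrics": [
      {"labels": {"topic": "orders"}, "value": "1234"},
      {"labels": {"topic": "payments"}, "value": "567"}
    ]
  }
]
'''

def flatten(prom2json_output):
    """Turn prom2json metric families into flat records, one per
    labeled sample, with the value parsed as a float."""
    records = []
    for family in json.loads(prom2json_output):
        for m in family.get("metrics", []):
            records.append({
                "metric": family["name"],
                "labels": m.get("labels", {}),
                "value": float(m["value"]),
            })
    return records

for rec in flatten(sample):
    print(json.dumps(rec))
```

Emitting one JSON object per line like this is a common format for log shippers, since each line can be parsed and indexed independently.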
Prometheus is an entire ecosystem of well-written and stable tools. In my opinion, the best approach is to reuse it as much as possible. What I see as a good integration point between Prometheus and OpenSearch is using OpenSearch as a remote storage back end. Observability could then take advantage of a central point for logs, tracing and metrics, and correlate data across them.
Hello,
This enhancement request covers the following feature:
@abasatwar @spattnaik