Add kafka common fields at the module level #7767
Conversation
@@ -95,25 +97,36 @@ func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
 	}, nil
 }
 
-func (m *MetricSet) Fetch() ([]common.MapStr, error) {
+func (m *MetricSet) Fetch(r mb.ReporterV2) {
exported method MetricSet.Fetch should have comment or be unexported
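For context, here is a minimal sketch of what a ReporterV2-based `Fetch` with a doc comment can look like, and how shared fields are attached via `mb.Event.ModuleFields` so the consumergroup and partition metricsets can be correlated. The `MetricSet` definition, the `fetchEvents` helper, the `brokerAddress` field, and the concrete field names are illustrative assumptions, not this PR's actual code:

```go
package partition

import (
	"github.com/elastic/beats/libbeat/common"
	"github.com/elastic/beats/metricbeat/mb"
)

// MetricSet is sketched minimally here; the real metricset embeds
// mb.BaseMetricSet and holds a broker connection.
type MetricSet struct {
	mb.BaseMetricSet
	brokerAddress string // assumed field holding the advertised broker address
}

// fetchEvents stands in for the metricset's actual broker queries.
func (m *MetricSet) fetchEvents() ([]common.MapStr, error) {
	return []common.MapStr{{"offset": common.MapStr{"newest": int64(0)}}}, nil
}

// Fetch collects partition metrics and reports them through the ReporterV2
// interface. Common fields go into ModuleFields so they end up at the module
// level, shared by all kafka metricsets.
func (m *MetricSet) Fetch(r mb.ReporterV2) {
	events, err := m.fetchEvents()
	if err != nil {
		r.Error(err)
		return
	}
	for _, fields := range events {
		r.Event(mb.Event{
			ModuleFields:    common.MapStr{"broker": common.MapStr{"address": m.brokerAddress}},
			MetricSetFields: fields,
		})
	}
}
```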
Marked in progress because the integration test added for retrieving consumergroup data is not working yet 🤔
The branch was force-pushed from a1307ce to f93a014.
@@ -26,10 +27,12 @@
 
     - name: topic
       type: keyword
+      deprecated: true
Kafka is still beta, so we could remove the old entries. Instead, we could have a more detailed explanation in the changelog or in the breaking-changes section.
Even being in beta, I'd prefer to keep the old fields in case someone is using them; we can remove them later. I'll add a note about this in the changelog.
++ on the shared fields
"broker": { | ||
"address": "kafka0:9092", | ||
"address": "172.18.0.2:9092", |
Interesting, on my laptop the broker address is different for the consumergroup and partition metricsets; this is not good for correlations 🤔
Using the advertised address in both metricsets now, so it can be more easily used for correlations.
> Interesting, on my laptop the broker address is different for the consumergroup and partition metricsets; this is not good for correlations
This is correct behavior. In Kafka a broker can fulfill many tasks, e.g. being the leader of a partition that producers write to, or managing a consumer group. Consumer groups can be handled by very different brokers. Yet the consumer group index used to store committed offsets from consumers is another internal "topic", potentially owned by yet another broker. State is very localized.
Yes, the problem was that in one metricset we were using the address we connect to, and in the other the advertised address, so the broker was the same but different addresses were being collected.
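For reference, a small standalone sketch (not part of this PR) of how the advertised broker addresses can be read from the cluster metadata with sarama; the bootstrap address `kafka0:9092` is just an example value:

```go
package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	// Connect through a bootstrap address; this can differ from what the
	// brokers advertise in their metadata.
	client, err := sarama.NewClient([]string{"kafka0:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Brokers returned from metadata expose the advertised host:port, which is
	// the value both metricsets should report so events can be correlated.
	for _, broker := range client.Brokers() {
		fmt.Printf("broker id=%d advertised address=%s\n", broker.ID(), broker.Addr())
	}
}
```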
	ms := mbtest.NewReportingMetricSetV2(t, getConfig())
	for retries := 0; retries < 3; retries++ {
		err = mbtest.WriteEventsReporterV2(ms, t, "")
Added retries here because it seems that the consumer group can take some time to appear in the metrics.
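The diff only shows the first lines of that retry; a plausible complete shape for it, wrapped in a data test, could look like the sketch below. The `TestData` name, the sleep between attempts, and the final failure handling are assumptions rather than the PR's exact code (`getConfig` is the test file's own helper):

```go
package consumergroup

import (
	"testing"
	"time"

	mbtest "github.com/elastic/beats/metricbeat/mb/testing"
)

func TestData(t *testing.T) {
	ms := mbtest.NewReportingMetricSetV2(t, getConfig())

	var err error
	for retries := 0; retries < 3; retries++ {
		err = mbtest.WriteEventsReporterV2(ms, t, "")
		if err == nil {
			break
		}
		// The consumer group may not have shown up in the metrics yet.
		time.Sleep(time.Second)
	}
	if err != nil {
		t.Fatal(err)
	}
}
```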
func startConsumer(t *testing.T, topic string) (io.Closer, error) {
	brokers := []string{getTestKafkaHost()}
	topics := []string{topic}
	return saramacluster.NewConsumer(brokers, "test-group", topics, nil)
}
I started trying to create and join a consumer group using sarama only, but I ended up copying parts of sarama-cluster, so even if it is only for tests I think it is worth adding this library.
The number of consumers in a consumer group is limited by the number of partitions in a topic. If we pre-initialize the topics with a known set of partitions, then we just have to set up N goroutines joining a consumer group. Using sarama-cluster might be the easier option though :)
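As a rough illustration of that alternative: with a topic pre-created with n partitions, n consumers joining the same group can each own a partition. The helper below is only a sketch built on the same sarama-cluster call as `startConsumer` above; the function name and group id are illustrative:

```go
package consumergroup

import (
	"io"

	saramacluster "github.com/bsm/sarama-cluster"
)

// startConsumerGroup starts n consumers in the same group. It assumes the
// topic was pre-created with at least n partitions so every consumer can be
// assigned one.
func startConsumerGroup(brokers []string, topic string, n int) ([]io.Closer, error) {
	consumers := make([]io.Closer, 0, n)
	for i := 0; i < n; i++ {
		c, err := saramacluster.NewConsumer(brokers, "test-group", []string{topic}, nil)
		if err != nil {
			return consumers, err
		}
		consumers = append(consumers, c)
	}
	return consumers, nil
}
```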
Going to merge, as the changelog can still be cleaned up later.
@@ -83,6 +84,8 @@ https://github.com/elastic/beats/compare/v6.4.0...master[Check the HEAD diff]
 *Metricbeat*
 - Redis `info` `replication.master_offset` has been deprecated in favor of `replication.master.offset`.{pull}7695[7695]
+
Is this newline on purpose?
Move some common fields in kafka metricsets to the module level as proposed by @gingerwizard. This will allow correlating documents of different events. (cherry picked from commit 4fedeec)

Kafka dashboards calculate cardinalities of composed fields, which is expensive at query time but can be optimized if the composed fields are precalculated. Continues with elastic#7767, needed for elastic#8457. (cherry picked from commit 78ef120)

Move some common fields in kafka metricsets to the module level as proposed by @gingerwizard. This will allow correlating documents of different events.
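To make the follow-up idea concrete, here is a small sketch of precomputing a composed key at ingest time, so dashboards can run cardinality aggregations on a single keyword field instead of combining fields at query time. The field names and key format are purely illustrative, not the actual kafka module mapping:

```go
package main

import (
	"fmt"

	"github.com/elastic/beats/libbeat/common"
)

func main() {
	// Hypothetical identifiers for a partition document.
	brokerID, topic, partition := int32(2), "test", int32(0)

	fields := common.MapStr{
		"broker":    common.MapStr{"id": brokerID},
		"topic":     common.MapStr{"name": topic},
		"partition": common.MapStr{"id": partition},
	}
	// Precomputed composed key: cheap to run cardinality aggregations on.
	fields.Put("id", fmt.Sprintf("%d-%s-%d", brokerID, topic, partition))

	fmt.Println(fields.StringToPrint())
}
```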