
Kafka scaler: concurrent offset fetches #2405

Merged: 11 commits merged into kedacore:main on Jan 3, 2022
Conversation

@VerstraeteBert (Contributor) commented Dec 17, 2021

Concurrently query brokers for consumer and producer offsets

Checklist

  • Commits are signed with Developer Certificate of Origin (DCO)
  • Tests have been added
  • A PR is opened to update our Helm chart (repo) (if applicable, i.e. when deployment manifests are modified)
  • A PR is opened to update the documentation on (repo) (if applicable)
  • Changelog has been updated

Fixes #2377.

Signed-off-by: VerstraeteBert <[email protected]>

@VerstraeteBert requested a review from a team as a code owner on December 17, 2021 12:35
@zroubalik (Member) left a comment


Looking good, I wonder if we can cover this with some tests.

@pierDipi @lionelvillard @matzew mind looking?

@zroubalik (Member) commented:

@bpinske PTAL

@bpinske (Contributor) commented Dec 20, 2021

> Looking good, I wonder if we can cover this with some tests.

It looks like the Kafka e2e tests already include multiple partitions, so I think this should be covered by existing tests. https://github.com/kedacore/keda/blob/main/tests/scalers/kafka.test.ts#L242

@VerstraeteBert (Contributor, Author) commented Dec 21, 2021

Thanks for the feedback on my initial work, by the way, @bpinske.

I found a goroutine leak when an error is thrown and fixed it. The tests don't cover the failure scenario, but I do think the logic is sound now. A second opinion would be great, though.
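
For readers following along, here is a minimal sketch of the leak and the fix, with hypothetical names rather than the actual scaler code: if the collector were to return early on the first error while senders are still blocked on an unbuffered channel, those goroutines would never exit. Buffering the channel with capacity for every expected result, and draining it, avoids that.

```go
// Illustrative sketch, not the exact KEDA implementation.
package scaler

// offsetResult carries one partition's offset (or the error that occurred
// while fetching it) back to the collecting goroutine.
type offsetResult struct {
	partition int32
	offset    int64
	err       error
}

// fetchPartitionOffsets queries all partitions concurrently. The results
// channel is buffered with capacity for every partition, so a sender can
// never block; the loop below also drains exactly one result per partition,
// so no goroutine is left behind when an error occurs.
func fetchPartitionOffsets(partitions []int32, fetch func(int32) (int64, error)) (map[int32]int64, error) {
	results := make(chan offsetResult, len(partitions)) // buffered: no stuck senders
	for _, p := range partitions {
		go func(p int32) {
			off, err := fetch(p)
			results <- offsetResult{partition: p, offset: off, err: err}
		}(p)
	}

	offsets := make(map[int32]int64, len(partitions))
	var firstErr error
	for range partitions { // receive exactly one result per partition
		r := <-results
		if r.err != nil {
			if firstErr == nil {
				firstErr = r.err
			}
			continue
		}
		offsets[r.partition] = r.offset
	}
	if firstErr != nil {
		return nil, firstErr
	}
	return offsets, nil
}
```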

@bpinske (Contributor) commented Dec 21, 2021

The logic looks correct to me. As long as the e2e tests continue to work, I think this is a great improvement.

@PaulLiang1 (Contributor) commented:
> Looking good, I wonder if we can cover this with some tests.
>
> It looks like the Kafka e2e tests already include multiple partitions, so I think this should be covered by existing tests. https://github.com/kedacore/keda/blob/main/tests/scalers/kafka.test.ts#L242

While I was working on #2409, I had to set up the Kafka e2e test locally, and it appears it runs a single-node broker for the test.

For this change, would it be beneficial to cover the multi-broker scenario?

@pierDipi commented:

> > Looking good, I wonder if we can cover this with some tests.
> >
> > It looks like the Kafka e2e tests already include multiple partitions, so I think this should be covered by existing tests. https://github.com/kedacore/keda/blob/main/tests/scalers/kafka.test.ts#L242
>
> While I was working on #2409, I had to set up the Kafka e2e test locally, and it appears it runs a single-node broker for the test.
>
> For this change, would it be beneficial to cover the multi-broker scenario?

+1

@tomkerkhove (Member) commented:

Great to see a contribution from somebody in my neighborhood (I also live near Bruges 😄).

@VerstraeteBert (Contributor, Author) commented Dec 22, 2021

Thanks for peeking, @pierDipi and @PaulLiang1. I've changed the Strimzi cluster CR to spawn 3 brokers.

@tomkerkhove I see you're a Howest alumnus as well :-), cool! Could you trigger the e2e tests for me?

@JorTurFer (Member) commented Dec 22, 2021

/run-e2e kafka.test*
Update: You can check the progress here.

Two review threads on pkg/scalers/kafka_scaler.go (outdated, resolved)
@JorTurFer (Member) commented:

Hi @VerstraeteBert,
Please sign your last commit; when you accept suggestions in GitHub, they are not signed by default :(
https://github.com/kedacore/keda/blob/main/CONTRIBUTING.md#i-didnt-sign-my-commit-now-what

VerstraeteBert and others added 2 commits December 22, 2021
@VerstraeteBert (Contributor, Author) commented Dec 23, 2021

While we're optimizing the metric generation, we might as well fetch the consumer and producer offsets concurrently too. With the last commit, the I/O latency changes from consumerT + ~max(producerTs) to ~max(consumerT, producerTs).
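
A sketch of the shape of that change, with illustrative names rather than the actual scaler code: both fetches are launched at once and the collector waits for both, so the wall-clock cost is roughly the slower of the two calls instead of their sum.

```go
// Illustrative sketch, not the exact KEDA implementation.
package scaler

type consumerOffsets map[int32]int64
type producerOffsets map[int32]int64

// getOffsets runs the consumer and producer offset queries in parallel.
// All channels are buffered, so neither goroutine can block after the
// collector returns early on an error.
func getOffsets(
	fetchConsumer func() (consumerOffsets, error),
	fetchProducer func() (producerOffsets, error),
) (consumerOffsets, producerOffsets, error) {
	consumerCh := make(chan consumerOffsets, 1)
	producerCh := make(chan producerOffsets, 1)
	errCh := make(chan error, 2)

	go func() {
		co, err := fetchConsumer()
		if err != nil {
			errCh <- err
			return
		}
		consumerCh <- co
	}()
	go func() {
		po, err := fetchProducer()
		if err != nil {
			errCh <- err
			return
		}
		producerCh <- po
	}()

	var co consumerOffsets
	var po producerOffsets
	for i := 0; i < 2; i++ { // one event per goroutine: a result or an error
		select {
		case co = <-consumerCh:
		case po = <-producerCh:
		case err := <-errCh:
			return nil, nil, err
		}
	}
	return co, po, nil
}
```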

What do you guys think about this change?

Should I let getProducerOffsets and getConsumerOffsets take a result channel as a parameter instead, since they are only used within a goroutine wrapper now?

@bpinske (Contributor) commented Dec 23, 2021

> What do you guys think about this change?

I like it. The fewer round trips we need to wait on, the better.

> Should I let getProducerOffsets and getConsumerOffsets take a result channel as a parameter instead, since they are only used within a goroutine wrapper now?

Personally, I think it would be best to keep the functions structured as you have them. It may make future feature improvements easier if the concurrency isn't too tightly coupled to the logic of collecting offsets.
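
To make the trade-off concrete (the signatures here are hypothetical, sketched from the names used above): a value-returning function keeps the channel plumbing at the call site, while a channel-taking variant ties every caller, and every test, to the goroutine wrapper.

```go
// Illustrative comparison, not actual KEDA code.
package main

import "fmt"

// Decoupled style (what the PR keeps): the function returns values and knows
// nothing about goroutines, so it can be called synchronously or wrapped.
func getProducerOffsets() (map[int32]int64, error) {
	return map[int32]int64{0: 42}, nil // stubbed data for illustration
}

// producerResult bundles the offsets with any fetch error for channel delivery.
type producerResult struct {
	offsets map[int32]int64
	err     error
}

// Coupled style (the alternative being discussed): the channel becomes part
// of the signature, so even a simple synchronous caller needs channel plumbing.
func getProducerOffsetsChan(results chan<- producerResult) {
	offsets, err := getProducerOffsets()
	results <- producerResult{offsets: offsets, err: err}
}

func main() {
	// Decoupled: trivial to call and test directly.
	offsets, err := getProducerOffsets()
	fmt.Println(offsets, err)

	// Coupled: the caller must set up the channel and goroutine itself.
	results := make(chan producerResult, 1)
	go getProducerOffsetsChan(results)
	r := <-results
	fmt.Println(r.offsets, r.err)
}
```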

@VerstraeteBert changed the title from “Concurrent Kafka broker partition offset queries” to “Kafka scaler: concurrent offset fetches” on Dec 24, 2021
@VerstraeteBert (Contributor, Author) commented Dec 24, 2021

> > What do you guys think about this change?
>
> I like it. The fewer round trips we need to wait on, the better.
>
> > Should I let getProducerOffsets and getConsumerOffsets take a result channel as a parameter instead, since they are only used within a goroutine wrapper now?
>
> Personally, I think it would be best to keep the functions structured as you have them. It may make future feature improvements easier if the concurrency isn't too tightly coupled to the logic of collecting offsets.

Sounds good, thanks for the feedback, @bpinske.

@zroubalik added this to the v2.6.0 milestone on Jan 3, 2022
@zroubalik (Member) left a comment


LGTM, thanks a lot @VerstraeteBert

@zroubalik merged commit 82ec31f into kedacore:main on Jan 3, 2022
zroubalik pushed a commit to zroubalik/keda that referenced this pull request Jan 4, 2022
zroubalik pushed a commit that referenced this pull request Jan 4, 2022
alex60217101990 pushed a commit to dysnix/keda that referenced this pull request Jan 10, 2022
Closes: Kafka scaler should concurrently query brokers and partitions for their message offsets (#2377)

7 participants