
smoother scaling (predictive scaling) #2401

Open
aslom opened this issue Dec 13, 2021 · 10 comments
Labels
feature-request (All issues for new features that have not been committed to) · needs-discussion · stale-bot-ignore (All issues that should not be automatically closed by our stale bot)

Comments

@aslom

aslom commented Dec 13, 2021

Proposal

Currently, KEDA scalers do not have an easy way to predict scaling targets or to keep the necessary history of measurements.

Use-Case

The Kafka scaler scales the number of Kafka consumers. Each scaling step triggers a Kafka rebalance, because the broker needs to re-assign consumers to topic partitions, and that can take 10 seconds or longer. During a rebalance events cannot be consumed, which leads to a jarring experience when scaling is repeated (events are not consumed during each rebalance) and replicas go 1 -> 2 -> 4 -> 8 -> 16 -> ... up to the number of partitions (large topics may have hundreds of partitions).

Anything else?

The best way to explore the issue may be to build a quick prototype for the Kafka scaler and see how generic a prediction/history interface can be.
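To make the idea concrete, here is a rough, purely illustrative sketch of what such a prediction/history interface could look like — none of these types exist in KEDA today, the names are made up:

```go
package predict

import "time"

// Sample is one observed metric value (e.g. Kafka consumer lag) at a point in time.
type Sample struct {
	Timestamp time.Time
	Value     float64
}

// History keeps a bounded window of recent samples for a scaler.
type History interface {
	Add(s Sample)
	// Window returns the samples observed during the last d.
	Window(d time.Duration) []Sample
}

// Predictor estimates the metric value `horizon` ahead of now, based on the
// recorded history. A scaler could report this predicted value instead of
// (or blended with) the instantaneous one.
type Predictor interface {
	Predict(h History, horizon time.Duration) (float64, error)
}
```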

aslom added the feature-request and needs-discussion labels on Dec 13, 2021
@zroubalik
Member

Yeah, we should try to implement this kind of stuff in the Metrics Server; this is a relevant issue that needs to be tackled first: #2282

For reference, adding a link to the HPA docs and its scaling behavior configuration; this could help with mitigating and configuring the smoothness of the scaling as well: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior
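For example, a ScaledObject can already pass that HPA behavior through its advanced section to cap how fast consumers are added instead of letting them double each step — the resource names, trigger metadata values, and numbers below are just illustrative:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler      # illustrative name
spec:
  scaleTargetRef:
    name: kafka-consumer           # illustrative Deployment name
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 60
          policies:
          - type: Pods
            value: 2               # add at most 2 replicas...
            periodSeconds: 60      # ...per minute, instead of doubling
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka:9092 # illustrative
      consumerGroup: my-group      # illustrative
      topic: my-topic              # illustrative
      lagThreshold: "50"
```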

@VerstraeteBert
Contributor

I would love to see this as well in the future. A solid predictive scaling mechanism could turn KEDA into a very powerful tool, where users don't need to worry about tweaking any of the scaling parameters; e.g., a method that takes into account the number of events coming in per timespan vs. historic data on the number of events processed per active replica in the same timespan (rough sketch below). One can dream, right?

Supplementary use case: serverless platforms wanting to offer fully transparent autoscaling to their users.
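A back-of-the-envelope sketch of that throughput-based idea — all names are hypothetical, this is not an existing KEDA API:

```go
package predict

import "math"

// EstimateReplicas sizes the workload from observed throughput:
// incomingRate is events/sec arriving in the recent window, perReplicaRate
// is the historic events/sec a single replica has processed, and
// maxReplicas caps the result (e.g. at the partition count for Kafka).
func EstimateReplicas(incomingRate, perReplicaRate float64, maxReplicas int) int {
	if perReplicaRate <= 0 {
		return maxReplicas
	}
	n := int(math.Ceil(incomingRate / perReplicaRate))
	if n < 1 {
		n = 1
	}
	if n > maxReplicas {
		n = maxReplicas
	}
	return n
}
```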

@tomkerkhove
Member

Relates to #197

@daniel-yavorovich
Contributor

@aslom

We had similar thoughts, though not specifically in the Kafka context, and created our own scaler based on an AI model. Take a look at how it performs; maybe it will work for you too.

PR: #2418

@tomkerkhove
Member

OK if we close this issue in favor of #197, @aslom?

@aslom
Author

aslom commented Feb 1, 2022

@tomkerkhove I would like to keep it open, as I have started a simple non-AI version and am testing the AI version to see side by side how they work for Kafka. My intuition is that the simple version may be good for predictability and/or could be used together with the AI version.

@zroubalik
Member

Yeah, this is a little bit different approach: it would use just a short window of the last few metric values to do the calculation.
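For illustration only (nothing here exists in KEDA, the function name is made up), such a short-window calculation could be as simple as fitting a least-squares line through the last few samples and extrapolating one polling interval ahead:

```go
package predict

import "time"

// Extrapolate fits a least-squares line through the recent metric samples
// (times[i], values[i]) and returns the value predicted `horizon` after the
// most recent sample. With fewer than two samples it falls back to the last
// observed value (or zero if there are none).
func Extrapolate(times []time.Time, values []float64, horizon time.Duration) float64 {
	n := len(values)
	if n == 0 {
		return 0
	}
	if n == 1 {
		return values[0]
	}
	t0 := times[0]
	var sumX, sumY, sumXY, sumXX float64
	for i := 0; i < n; i++ {
		x := times[i].Sub(t0).Seconds()
		y := values[i]
		sumX += x
		sumY += y
		sumXY += x * y
		sumXX += x * x
	}
	fn := float64(n)
	denom := fn*sumXX - sumX*sumX
	if denom == 0 {
		return values[n-1]
	}
	slope := (fn*sumXY - sumX*sumY) / denom
	intercept := (sumY - slope*sumX) / fn
	x := times[n-1].Sub(t0).Seconds() + horizon.Seconds()
	return intercept + slope*x
}
```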

@stale

stale bot commented Apr 4, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Apr 4, 2022
@aslom
Author

aslom commented Apr 4, 2022

Still looking into it.

stale bot removed the stale label on Apr 4, 2022
@aslom
Author

aslom commented May 13, 2022

/remove-lifecycle stale

tomkerkhove added the stale-bot-ignore label on May 16, 2022