
Use predictive analytics to activate a pod #197

Open
jeffhollan opened this issue May 15, 2019 · 7 comments
Labels: Epic, help wanted (Looking for support from community), stale-bot-ignore (All issues that should not be automatically closed by our stale bot)

Comments

@jeffhollan
Member

A very “blue sky” feature but would be amazing to have KEDA look at historic data and patterns for deployments to try to predict when events may be coming in and scale proactively
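To make the idea concrete, here is a minimal sketch of what such prediction could look like, assuming KEDA stored a history of per-interval metric samples for a deployment (all names and the simple seasonal-average model are hypothetical, not part of KEDA):

```python
from math import ceil

def forecast_next(samples, period):
    """Forecast the next metric value by averaging past observations at
    the same phase of a known cycle (e.g. the same hour across days).

    samples: historical metric values, one per interval, oldest first.
    period:  length of the cycle, in intervals.
    """
    next_phase = len(samples) % period
    same_phase = samples[next_phase::period]
    if not same_phase:
        return samples[-1]  # no history at this phase yet; fall back
    return sum(same_phase) / len(same_phase)

def predicted_replicas(samples, period, target_per_replica,
                       min_r=1, max_r=100):
    """Turn the forecast into a proactive replica count, clamped to
    the configured min/max replica bounds."""
    predicted_load = forecast_next(samples, period)
    return max(min_r, min(max_r, ceil(predicted_load / target_per_replica)))
```

A real implementation would of course need a proper time-series model and a store for the historical samples; the point is only that the scaler could act on the forecast *before* the events arrive, rather than on the current value.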

@aminebizid

aminebizid commented May 15, 2019 via email

@jeffhollan jeffhollan added the help wanted Looking for support from community label May 16, 2019
@GokGokalp

A very “blue sky” feature but would be amazing to have KEDA look at historic data and patterns for deployments to try to predict when events may be coming in and scale proactively

Especially for the RabbitMQ scaler, that would be great. If we could also use additional variables such as "consumer utilization", "consumer ack", and "delivery" alongside the "queueLength" variable, maybe we could scale pods in a smarter reactive way?

These days I'm playing with KEDA and testing it in our staging environment. KEDA currently scales our pods based on the "queueLength" variable, which is great. But some of our RabbitMQ consumers perform I/O-bound operations, and if KEDA keeps scaling linearly on "queueLength" alone, the other I/O services those consumers depend on start to become a bottleneck.
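One way to combine those metrics could look like the sketch below (hypothetical, not KEDA's actual scaler logic): the usual `queueLength`-proportional target, damped when RabbitMQ's consumer utilization is low, which indicates the consumers are already saturated (e.g. blocked on downstream I/O), so adding pods linearly would only deepen the bottleneck:

```python
from math import ceil

def desired_replicas(queue_length, target_queue_length,
                     consumer_utilization, current_replicas,
                     min_r=1, max_r=50, util_floor=0.5):
    """queue_length / target_queue_length gives the usual linear
    target (roughly what the rabbitmq trigger feeds the HPA today).
    When consumer utilization is below util_floor, growth is limited
    to one replica per decision instead of jumping straight to the
    linear target."""
    linear = ceil(queue_length / target_queue_length)
    if consumer_utilization < util_floor and linear > current_replicas:
        linear = current_replicas + 1  # damped scale-up
    return max(min_r, min(max_r, linear))
```

The `util_floor` threshold and the one-replica step are arbitrary illustrative choices; the point is that a second signal can veto or slow down what `queueLength` alone would demand.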

@turbaszek
Contributor

That's a really interesting idea. A few months ago I was doing research on best practices and patterns for autoscaling applications, and I stumbled on a research paper about autoscaling using a predictive model.

KEDA look at historic data

@jeffhollan what historic data did you have in mind? Also, should it be something maintained by KEDA itself, or should this be a pluggable solution so anyone can use a custom model/algorithm?

@stale

stale bot commented Nov 27, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Nov 27, 2021
@zroubalik zroubalik added the stale-bot-ignore All issues that should not be automatically closed by our stale bot label Nov 27, 2021
@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Nov 27, 2021
@daniel-yavorovich
Contributor

@jeffhollan

We were dreaming about the same thing and developed this kind of solution. It is based on a simple but working AI model and predicts pretty well.
Maybe it's not the "blue sky" you're looking for, but you should definitely take a look.

PR: #2418

@rwkarg
Contributor

rwkarg commented Jun 29, 2022

A hill-climbing algorithm like the one used by the CLR thread pool could be a candidate for this. It adds a thread (or an instance, in this case), checks whether the change had a positive impact on the backlog, and removes it if it didn't.

It's a reactive rather than a predictive approach, but it may be more generally applicable, since it avoids having to specify the cyclical period to look back over (hourly batch process? daily user load? weekly jobs? one-off events?). It also doesn't require storing historical data.

Given the way KEDA interfaces with HPAs, it would be a bit roundabout (the reported metric values would need to be manipulated to produce the desired instance count directly), but that's the interface we have to work with short of writing a new pod autoscaler.
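The hill-climbing idea can be sketched in a few lines (illustrative only; the real CLR thread-pool heuristic is considerably more elaborate, with smoothing and wave-based measurement): perturb the instance count in one direction, keep going while throughput improves, and reverse direction when it degrades:

```python
class HillClimbingScaler:
    """Reactive hill-climbing sketch: no historical data, no cycle
    length to configure -- only the last observed throughput."""

    def __init__(self, replicas=1, min_r=1, max_r=50):
        self.replicas = replicas
        self.min_r, self.max_r = min_r, max_r
        self.direction = 1          # +1 = scale up, -1 = scale down
        self.last_throughput = 0.0

    def step(self, throughput):
        """Called once per decision interval with the throughput
        (e.g. messages drained) observed since the last step."""
        if throughput < self.last_throughput:
            self.direction = -self.direction  # last move hurt: reverse
        self.last_throughput = throughput
        self.replicas = max(self.min_r,
                            min(self.max_r, self.replicas + self.direction))
        return self.replicas
```

In KEDA's case the returned replica count would then have to be translated back into a reported metric value that drives the HPA to that count, as noted above.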

@zroubalik
Member

zroubalik commented Jun 29, 2022

I'd love to see a generic interface sitting between the metrics reported from scalers and the HPA. There we could "manipulate" the metrics the way we'd like: for example, adding more logic to the evaluation of metrics from multiple triggers, or plugging in some AI/ML model. The only option we have today for "manipulating" metrics is the fallback feature, which is great but not generic enough; a generic interface would be much better.
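Such a seam might look something like this (a hypothetical shape, not an actual KEDA API): each modifier takes the raw per-trigger values and returns the single value that should be exposed to the HPA, so a predictive model and the default behaviour become interchangeable plugins:

```python
from abc import ABC, abstractmethod

class MetricModifier(ABC):
    """Hypothetical hook between scaler metrics and the HPA."""

    @abstractmethod
    def modify(self, metrics: dict) -> float:
        """metrics maps trigger name -> reported value; the return
        value is what gets handed to the HPA."""

class MaxOf(MetricModifier):
    """Plain behaviour: expose the largest trigger value."""
    def modify(self, metrics):
        return max(metrics.values())

class PredictiveModel(MetricModifier):
    """Placeholder for an AI/ML model plugged into the same seam;
    `model` is any object with a predict(metrics) method."""
    def __init__(self, model):
        self.model = model

    def modify(self, metrics):
        return self.model.predict(metrics)
```

The appeal of this shape is exactly what the comment asks for: the fallback feature, multi-trigger evaluation logic, and predictive models would all just be different `MetricModifier` implementations.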

Writing a new pod autoscaler is something I'd like to avoid 😅

Projects: To Do
Development: no branches or pull requests
8 participants