Add support for horizontal pod autoscaling #1676
Conversation
@stanislav-zaprudskiy you might be interested in this
Patch looks good to me. Most users of awx are probably not deploying constantly, so keeping a bunch of web or task pods running does not make sense; it makes sense only when they are actually used. In theory, with this the community can collectively save a lot on requested vCPUs that sit idle the majority of the time.
This would really help me out. We run a lot of jobs on a schedule at particular after-hours times, so dynamically increasing the available resources is really needed.
@dhageman PR looks great. Can you rebase so we can get it in?
@dhageman can you give this a rebase? |
* Allow to scale up the operator pods by using the Helm Chart
* Add support for horizontal pod autoscaling (#1676)
* fix: spec.replicas
SUMMARY
Allows the operator to create horizontal pod autoscaling resources for the web and task deployments.
It does not provide guidance on how to configure the HPA, as those details are very cluster- and use-case-specific.
ISSUE TYPE
ADDITIONAL INFORMATION
This code does not change the default behavior of the operator. A horizontal pod autoscaler will not be created unless it is specifically configured.
The minimum configuration required is to set `task_max_replicas` or `web_max_replicas` to an integer greater than `task_replicas` or `web_replicas`, respectively. This creates an HPA using the default HPA values, which on OpenShift monitor CPU consumption. A good HPA configuration will also require requests, limits, and health probes to be implemented for the web and task pods.
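As an illustration, a minimal AWX custom resource enabling the task HPA described above might look like the sketch below. The `task_replicas`, `task_max_replicas`, and `task_resource_requirements` field names follow this PR and the operator's existing spec; the metadata name and all resource values are hypothetical examples, not recommendations.

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo   # example name
spec:
  # An HPA is only created because task_max_replicas > task_replicas
  task_replicas: 1
  task_max_replicas: 5
  # Requests/limits give the HPA's CPU-utilization metric a baseline
  # to scale against (values here are illustrative only)
  task_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
```

The same pattern applies to the web deployment via `web_replicas` and `web_max_replicas`.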