Support exposing scalable resources to Cluster Autoscaler in ClusterClass #5442
Comments
This should have common design considerations with #5125 |
/area topology |
/milestone Next |
/assign @randomvariable |
I like this idea, but considering the MHC ClusterClass proposal as well, it starts to make me wonder if we should have a more generic mechanism for allowing the user to add components that get installed on a per-cluster basis. We have had users ask about this type of feature in the past, and I wonder if we would see a way for users to include arbitrary manifests (or references) in their ClusterClass which could be deployed after the core cluster is running? |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
I think we should keep this open, but I'm not clear on the next steps. Would it be appropriate to draft a proposal for this? /remove-lifecycle stale |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
I still think this is a nice idea, but I'm not sure how the rest of the community feels. Is this worth bringing up again at a meeting, or should we consider rolling it into the lifecycle hooks work? (cc @enxebre) /remove-lifecycle rotten |
/lifecycle frozen |
Regarding the above points: |
That's kinda what I was wondering too.
Agreed, I'm not sure either. I had thought we decided not to include the autoscaler while we had the discussion during the ClusterClass enhancement process. I wonder if this issue needs updating to fit with the changes we have proposed more recently? |
Fabrizio wrote an interesting comment here: #5532 (comment) |
Hello! What is the current state of this issue? The lack of autoscaling abilities seems like a big blocker for migrating to ClusterClass and managed topologies. |
@MaxFedotov as you can tell, we haven't discussed this in the past month or so. I know the group is split about whether ClusterClass should expose a way to deploy the Cluster Autoscaler, and we have decided not to include this functionality for now. With that said, you could add the minimum and maximum scaling annotations to the MachineDeployments or MachineSets defined in the ClusterClass. That would at least give the autoscaler the ability to detect those scalable resources; you would just be responsible for deploying the autoscaler itself. |
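(Illustrative sketch, not from the comment above: the annotations in question are the ones read by the Cluster Autoscaler's Cluster API provider; the min/max values below are placeholders, and they would be set on the MachineDeployment or MachineSet metadata.)

```yaml
# Sketch: scaling bounds discovered by the Cluster Autoscaler's Cluster API provider.
# The "1" and "5" bounds are placeholder values.
metadata:
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
```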
We had a conversation about this on Slack; it has some interesting details about problems that users might find: https://kubernetes.slack.com/archives/C8TSNPY4T/p1658302795387519 |
Performed some tests using CAPD. If `replicas` is specified for the control plane and the worker MachineDeployments in the Cluster topology, as in:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capd
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 10.128.0.0/12
  topology:
    class: quick-start
    controlPlane:
      metadata: {}
      replicas: 1
    variables:
    - name: imageRepository
      value: k8s.gcr.io
    - name: etcdImageTag
      value: ""
    - name: coreDNSImageTag
      value: ""
    - name: podSecurityStandard
      value:
        audit: restricted
        enabled: true
        enforce: baseline
        warn: restricted
    version: v1.24.0
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0
        replicas: 1
```

then you won't be able to scale the resulting MachineDeployments, whether through the Cluster Autoscaler or by scaling them directly: the replica counts will always be reconciled back to the values specified in the Cluster topology.
But if your Cluster omits the `replicas` fields from the topology:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 10.128.0.0/12
  topology:
    class: quick-start
    controlPlane:
      metadata: {}
    variables:
    - name: imageRepository
      value: k8s.gcr.io
    - name: etcdImageTag
      value: ""
    - name: coreDNSImageTag
      value: ""
    - name: podSecurityStandard
      value:
        audit: restricted
        enabled: true
        enforce: baseline
        warn: restricted
    version: v1.24.0
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0
```

then the generated MachineDeployment can be scaled (for example by the Cluster Autoscaler) without the topology controller reverting its replica count. Many thanks to @elmiko and @killianmuldoon for helping to understand how to deal with this issue. |
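(Editor's sketch, not part of the comment above: assuming the metadata set on a MachineDeployment topology is propagated to the generated MachineDeployment, the autoscaler bounds could then be exposed directly in the Cluster topology; the min/max values are placeholders.)

```yaml
# Fragment of Cluster.spec, extending the example above.
topology:
  workers:
    machineDeployments:
    - class: default-worker
      name: md-0
      # replicas is intentionally omitted so the autoscaler owns the replica count.
      metadata:
        annotations:
          cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
          cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
```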
As per the comment above this is already possible. |
@fabriziopandini: Closing this issue. |
User Story
As a cluster operator, I would like to be able to enable Cluster Autoscaling for some node pools in managed topologies.
Detailed Description
ClusterClass provides a way to define a "stamp" to be used for creating many Clusters with a similar shape.
It would be great to have a way to expose scalable resources, i.e. MachineDeployments, to the Cluster Autoscaler in the ClusterClass, so they will be automatically included in the generated Clusters.
This has two separate parts: exposing the scaling bounds on the scalable resources, and deploying the Cluster Autoscaler itself.
/kind feature
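(Illustrative sketch of what this could look like, assuming annotations on the ClusterClass worker template are propagated to the generated MachineDeployments; the class name, template references, and scaling bounds are placeholders.)

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: quick-start
spec:
  workers:
    machineDeployments:
    - class: default-worker
      template:
        metadata:
          annotations:
            # Scaling bounds read by the Cluster Autoscaler's Cluster API provider (placeholder values).
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
        bootstrap:
          ref:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            name: quick-start-worker-bootstrap        # placeholder
        infrastructure:
          ref:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: DockerMachineTemplate
            name: quick-start-worker-machinetemplate  # placeholder
```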