Enable more optional configurations for AKS node pools #1706
Conversation
Welcome @meixingdb!
Hi @meixingdb. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from d54dd10 to 659e118
/assign @CecileRobertMichon
@@ -67,6 +67,7 @@ func New(scope ManagedClusterScope) *Service {
}

// Reconcile idempotently creates or updates a managed cluster, if possible.
//gocyclo:ignore
//gocyclo:ignore
Can you please think of a more elegant way to do this?
// MaxCount - Maximum number of nodes for auto-scaling
// +optional
MaxCount *int32 `json:"maxCount,omitempty"`

// MinCount - Minimum number of nodes for auto-scaling
// +optional
MinCount *int32 `json:"minCount,omitempty"`

// EnableAutoScaling - Whether to enable auto-scaler
// +optional
EnableAutoScaling *bool `json:"enableAutoScaling,omitempty"`
Suggested change (replace the three separate fields with a grouped struct):

// AutoScaling - Whether to enable auto-scaling for AKS
// +optional
AutoScaling *AutoScaling `json:"autoScaling,omitempty"`

type AutoScaling struct {
	// MaxCount - Maximum number of nodes for auto-scaling
	// +kubebuilder:validation:Required
	MaxCount *int32 `json:"maxCount"`
	// MinCount - Minimum number of nodes for auto-scaling
	// +kubebuilder:validation:Required
	MinCount *int32 `json:"minCount"`
}
This will help reduce the validation effort in the webhooks :)
return nil
}

// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type.
//gocyclo:ignore
//gocyclo:ignore
Please think of a more elegant way to do this :)
I think the cyclomatic complexity check is configured to:
- reduce the complexity of the code
- avoid cognitive burden
- reduce the bugginess of the code
If it is breaking, it means we should try to improve the code.
+1
also pasting my slack comment for reference:
the default gocyclo complexity that gets reported is 30, that’s very high. In general we should aim to be even lower than that (10-20). If the function is becoming too complex, the linter can be a good nudge to refactor things to make the code easier to read and easier to test. A function with > 30 cyclomatic complexity is hard to properly unit test and can lead to more logic / edge case bugs. I’m not saying we should blindly break out a big function into 5 smaller equally complex functions but the linter should be a signal that we need to rethink the code, maybe there are other refactors that can be applied to simplify the logic
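For illustration, here is a minimal sketch of the kind of refactor that comment points toward; the helper names (`validateAutoScaling`, `validateImmutableFields`) are hypothetical and not from this PR:

```go
package v1beta1

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// ValidateUpdate delegates to small per-concern helpers so that no single
// function accumulates enough branches to trip the gocyclo linter, and each
// piece can be unit tested in isolation.
func (m *AzureManagedMachinePool) ValidateUpdate(oldRaw runtime.Object) error {
	old := oldRaw.(*AzureManagedMachinePool)
	var allErrs field.ErrorList
	allErrs = append(allErrs, m.validateAutoScaling()...)        // hypothetical helper
	allErrs = append(allErrs, m.validateImmutableFields(old)...) // hypothetical helper
	if len(allErrs) > 0 {
		return apierrors.NewInvalid(
			GroupVersion.WithKind("AzureManagedMachinePool").GroupKind(), m.Name, allErrs)
	}
	return nil
}
```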
azure/scope/managedcontrolplane.go
Outdated
if s.InfraMachinePool.Spec.MaxCount != nil {
	agentPoolSpec.MaxCount = s.InfraMachinePool.Spec.MaxCount
}

if s.InfraMachinePool.Spec.MinCount != nil {
	agentPoolSpec.MinCount = s.InfraMachinePool.Spec.MinCount
}

if s.InfraMachinePool.Spec.EnableAutoScaling != nil {
	agentPoolSpec.EnableAutoScaling = s.InfraMachinePool.Spec.EnableAutoScaling
}

if s.InfraMachinePool.Spec.EnableFIPS != nil {
	agentPoolSpec.EnableFIPS = s.InfraMachinePool.Spec.EnableFIPS
}

if s.InfraMachinePool.Spec.EnableNodePublicIP != nil {
	agentPoolSpec.EnableNodePublicIP = s.InfraMachinePool.Spec.EnableNodePublicIP
}

if s.InfraMachinePool.Spec.NodeLabels != nil {
	agentPoolSpec.NodeLabels = s.InfraMachinePool.Spec.NodeLabels
}

if s.InfraMachinePool.Spec.NodeTaints != nil {
	agentPoolSpec.NodeTaints = s.InfraMachinePool.Spec.NodeTaints
}

if s.InfraMachinePool.Spec.OsDiskType != nil {
	agentPoolSpec.OsDiskType = s.InfraMachinePool.Spec.OsDiskType
}
Do we need all of these checks?
azure/scope/managedcontrolplane.go
Outdated
if s.InfraMachinePool.Spec.AvailabilityZones != nil {
	agentPoolSpec.AvailabilityZones = s.InfraMachinePool.Spec.AvailabilityZones
}

if s.InfraMachinePool.Spec.ScaleSetPriority != nil {
	agentPoolSpec.ScaleSetPriority = s.InfraMachinePool.Spec.ScaleSetPriority
}

if s.InfraMachinePool.Spec.MaxPods != nil {
	agentPoolSpec.MaxPods = s.InfraMachinePool.Spec.MaxPods
}
same as above
@meixingdb I think logically everything seems fine! Can you please reduce the number of unwanted checks?
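For context, a minimal sketch of why most of these guards are redundant, assuming `agentPoolSpec` starts from its zero value and both sides use matching pointer (or slice/map) types: assigning nil just leaves the destination nil, so the fields can be copied unconditionally.

```go
// Minimal sketch, assuming agentPoolSpec's fields begin at their zero
// values: copying the pointers, slices, and maps unconditionally has the
// same effect as the nil-guarded version above.
agentPoolSpec.MaxCount = s.InfraMachinePool.Spec.MaxCount
agentPoolSpec.MinCount = s.InfraMachinePool.Spec.MinCount
agentPoolSpec.EnableAutoScaling = s.InfraMachinePool.Spec.EnableAutoScaling
agentPoolSpec.EnableFIPS = s.InfraMachinePool.Spec.EnableFIPS
agentPoolSpec.EnableNodePublicIP = s.InfraMachinePool.Spec.EnableNodePublicIP
agentPoolSpec.NodeLabels = s.InfraMachinePool.Spec.NodeLabels
agentPoolSpec.NodeTaints = s.InfraMachinePool.Spec.NodeTaints
agentPoolSpec.OsDiskType = s.InfraMachinePool.Spec.OsDiskType
```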
Force-pushed from 659e118 to eb9860f
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
@meixingdb please amend your commit messages to be more descriptive, thanks :)
@LochanRn Could you help me understand the test failures? For the apidiff, I think the newly added fields in AgentPoolSpec in azure/types.go are all optional, so why is it causing an issue?
It's not an issue; the job is meant to fail. If there are changes made to the API, the apidiff job fails, which becomes an indicator to the reviewers about changes made to the API.
@meixingdb, I will go through your PR by tomorrow; thanks for the changes made 😊
Thanks a lot!
Force-pushed from 2f8475c to 76f97a1
Oops, apparently some of my comments were still pending. They are still relevant to the alpha versions since we now have beta as well.
api/v1alpha4/types.go
Outdated
// KubeletConfig kubelet configurations of agent nodes.
type KubeletConfig struct {
	// CPUManagerPolicy - CPU Manager policy to use.
	CPUManagerPolicy *string `json:"cpuManagerPolicy,omitempty"`
Why is this duplicated from exp/api?
Looks like we need this; it's also in api/v1beta1/types.go without my change.
I tried removing this, but deepcopy generation complained about it.
@@ -37,6 +37,87 @@ type AzureManagedMachinePoolSpec struct {
// ProviderIDList is the unique identifier as specified by the cloud provider.
// +optional
ProviderIDList []string `json:"providerIDList,omitempty"`
Please remove this; do not change v1a3. Handle conversion in the webhook (there's an existing pattern; you will be able to restore the struct easily).
@alexeldeib could you refer me to the pattern? I think I tried one in the past, but it failed the test.
// Manually restore data.
I think that's what you're looking for?
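For reference, a minimal sketch of that pattern, assuming the `utilconversion` helpers from sigs.k8s.io/cluster-api; the import paths and the generated conversion function name are illustrative, not copied from this PR:

```go
import (
	utilconversion "sigs.k8s.io/cluster-api/util/conversion"
	"sigs.k8s.io/controller-runtime/pkg/conversion"

	infrav1exp "sigs.k8s.io/cluster-api-provider-azure/exp/api/v1beta1"
)

// ConvertTo converts this AzureManagedMachinePool to the Hub version,
// restoring fields the older API version cannot represent.
func (src *AzureManagedMachinePool) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*infrav1exp.AzureManagedMachinePool)
	if err := Convert_v1alpha3_AzureManagedMachinePool_To_v1beta1_AzureManagedMachinePool(src, dst, nil); err != nil {
		return err
	}

	// Manually restore data: read the annotation written by a previous
	// down-conversion and copy back fields only the newer version holds.
	restored := &infrav1exp.AzureManagedMachinePool{}
	if ok, err := utilconversion.UnmarshalData(src, restored); err != nil || !ok {
		return err
	}
	if restored.Spec.KubeletConfig != nil {
		dst.Spec.KubeletConfig = restored.Spec.KubeletConfig
	}
	return nil
}
```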
type KubeletConfig struct {
	// CPUManagerPolicy - CPU Manager policy to use.
	CPUManagerPolicy *string `json:"cpuManagerPolicy,omitempty"`
	// CPUCfsQuota - Enable CPU CFS quota enforcement for containers that specify CPU limits.
	CPUCfsQuota *bool `json:"cpuCfsQuota,omitempty"`
	// CPUCfsQuotaPeriod - Sets CPU CFS quota period value.
	CPUCfsQuotaPeriod *string `json:"cpuCfsQuotaPeriod,omitempty"`
	// ImageGcHighThreshold - The percent of disk usage after which image garbage collection is always run.
	ImageGcHighThreshold *int32 `json:"imageGcHighThreshold,omitempty"`
	// ImageGcLowThreshold - The percent of disk usage before which image garbage collection is never run.
	ImageGcLowThreshold *int32 `json:"imageGcLowThreshold,omitempty"`
	// TopologyManagerPolicy - Topology Manager policy to use.
	TopologyManagerPolicy *string `json:"topologyManagerPolicy,omitempty"`
	// AllowedUnsafeSysctls - Allowlist of unsafe sysctls or unsafe sysctl patterns (ending in `*`).
	// TODO: consider using []string instead of *[]string
	AllowedUnsafeSysctls *[]string `json:"allowedUnsafeSysctls,omitempty"`
	// FailSwapOn - If set to true it will make the Kubelet fail to start if swap is enabled on the node.
	FailSwapOn *bool `json:"failSwapOn,omitempty"`
	// ContainerLogMaxSizeMB - The maximum size (e.g. 10Mi) of container log file before it is rotated.
	ContainerLogMaxSizeMB *int32 `json:"containerLogMaxSizeMB,omitempty"`
	// ContainerLogMaxFiles - The maximum number of container log files that can be present for a container. The number must be ≥ 2.
	ContainerLogMaxFiles *int32 `json:"containerLogMaxFiles,omitempty"`
	// PodMaxPids - The maximum number of processes per pod.
	PodMaxPids *int32 `json:"podMaxPids,omitempty"`
}
Suggested change (add optionality and validation markers):

// KubeletConfig kubelet configurations of agent nodes.
type KubeletConfig struct {
	// CPUManagerPolicy - CPU Manager policy to use.
	// +kubebuilder:validation:Enum=none;static
	// +optional
	CPUManagerPolicy *string `json:"cpuManagerPolicy,omitempty"`
	// CPUCfsQuota - Enable CPU CFS quota enforcement for containers that specify CPU limits.
	// +optional
	CPUCfsQuota *bool `json:"cpuCfsQuota,omitempty"`
	// CPUCfsQuotaPeriod - Sets CPU CFS quota period value.
	// +optional
	CPUCfsQuotaPeriod *string `json:"cpuCfsQuotaPeriod,omitempty"`
	// ImageGcHighThreshold - The percent of disk usage after which image garbage collection is always run.
	// +optional
	// +kubebuilder:validation:Maximum=100
	// +kubebuilder:validation:Minimum=0
	ImageGcHighThreshold *int32 `json:"imageGcHighThreshold,omitempty"`
	// ImageGcLowThreshold - The percent of disk usage before which image garbage collection is never run.
	// +optional
	// +kubebuilder:validation:Maximum=100
	// +kubebuilder:validation:Minimum=0
	ImageGcLowThreshold *int32 `json:"imageGcLowThreshold,omitempty"`
	// TopologyManagerPolicy - Topology Manager policy to use.
	// +kubebuilder:validation:Enum=none;best-effort;restricted;single-numa-node
	// +optional
	TopologyManagerPolicy *string `json:"topologyManagerPolicy,omitempty"`
	// AllowedUnsafeSysctls - Allowlist of unsafe sysctls or unsafe sysctl patterns (ending in `*`).
	// +optional
	// +kubebuilder:validation:Enum=None;kernel.shm*;kernel.msg*;kernel.sem;fs.mqueue.*;net.*
	AllowedUnsafeSysctls *[]string `json:"allowedUnsafeSysctls,omitempty"`
	// FailSwapOn - If set to true it will make the Kubelet fail to start if swap is enabled on the node.
	// +optional
	FailSwapOn *bool `json:"failSwapOn,omitempty"`
	// ContainerLogMaxSizeMB - The maximum size (e.g. 10Mi) of container log file before it is rotated.
	// +optional
	ContainerLogMaxSizeMB *int32 `json:"containerLogMaxSizeMB,omitempty"`
	// ContainerLogMaxFiles - The maximum number of container log files that can be present for a container. The number must be ≥ 2.
	// +optional
	ContainerLogMaxFiles *int32 `json:"containerLogMaxFiles,omitempty"`
	// PodMaxPids - The maximum number of processes per pod.
	// +optional
	PodMaxPids *int32 `json:"podMaxPids,omitempty"`
}
These changes should be made in v1beta1 😅
azure/types.go
Outdated
AvailabilityZones []string `json:"availabilityZones,omitempty"`

// ScaleSetPriority - ScaleSetPriority to be used to specify virtual machine scale set priority. Default to regular. Possible values include: 'Spot', 'Regular'
// +optional
Suggested change: remove this `// +optional` line.
azure/types.go
Outdated
ScaleSetPriority *string `json:"scaleSetPriority,omitempty"`

// MaxPods - Maximum number of pods that can run on a node.
// +optional
Suggested change: remove this `// +optional` line.
azure/types.go
Outdated
MaxPods *int32 `json:"maxPods,omitempty"`

// KubeletConfig - KubeletConfig specifies the configuration of kubelet on agent nodes.
// +optional
Suggested change: remove this `// +optional` line.
// If AutoScaling is enabled, both MinCount and MaxCount should be set
if r.Spec.AutoScaling != nil && (r.Spec.AutoScaling.MaxCount == nil || r.Spec.AutoScaling.MinCount == nil || (*r.Spec.AutoScaling.MaxCount < *r.Spec.AutoScaling.MinCount)) {
	allErrs = append(allErrs,
Why are we validating again for MinCount and MaxCount?
Kubebuilder already does that, right?
// MaxCount - Maximum number of nodes for auto-scaling
// +kubebuilder:validation:Required
MaxCount *int32 `json:"maxCount"`
// MinCount - Minimum number of nodes for auto-scaling
// +kubebuilder:validation:Required
MinCount *int32 `json:"minCount"`
fixed
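To spell out the split: the `Required` markers let the API server reject a missing `minCount` or `maxCount`, but the ordering constraint between the two still needs the webhook. A minimal sketch reusing the field names above, with nil guards kept as defense in depth:

```go
// Kubebuilder validates presence; the webhook validates the relation.
if as := r.Spec.AutoScaling; as != nil && as.MinCount != nil && as.MaxCount != nil && *as.MaxCount < *as.MinCount {
	allErrs = append(allErrs, field.Invalid(
		field.NewPath("spec", "autoScaling", "maxCount"),
		*as.MaxCount,
		"maxCount must be greater than or equal to minCount"))
}
```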
Why are most of the fields immutable? Is there any documentation which you have followed?
// NodeTaints - Taints added to new nodes during node pool create and scale. For example, key=value:NoSchedule.
// +optional
NodeTaints []string `json:"nodeTaints,omitempty"`
For this field, can we add a regex to validate the strings?
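For example, a hypothetical validator for the `key=value:Effect` shape; the regex and the helper name are assumptions, not what this PR adopted:

```go
import (
	"regexp"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

// taintRegex approximates the key=value:Effect shape of a node taint.
var taintRegex = regexp.MustCompile(
	`^[a-zA-Z0-9]([-a-zA-Z0-9_./]*[a-zA-Z0-9])?=[-a-zA-Z0-9_.]*:(NoSchedule|PreferNoSchedule|NoExecute)$`)

// validateNodeTaints flags every entry that does not match the pattern.
func validateNodeTaints(taints []string, fldPath *field.Path) field.ErrorList {
	var allErrs field.ErrorList
	for i, taint := range taints {
		if !taintRegex.MatchString(taint) {
			allErrs = append(allErrs, field.Invalid(
				fldPath.Index(i), taint,
				"node taint must be of the form key=value:NoSchedule|PreferNoSchedule|NoExecute"))
		}
	}
	return allErrs
}
```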
if !reflect.DeepEqual(r.NodeTaints, old.NodeTaints) {
	updateErrs = append(updateErrs,
		field.Invalid(
			field.NewPath("Spec", "NodeTaints"),
			r.NodeTaints,
			"field is immutable"))
}
AFAIK taints on a Kubernetes node are mutable; please correct me if I am wrong!
AKS limitation today: https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#setting-nodepool-taints (see the box a bit below that link)
Thanks 😊
Force-pushed from e5773fd to 56180d2
Force-pushed from 56180d2 to 9eac953
Force-pushed from 9eac953 to f8181c4
@meixingdb: The following tests failed; say `/retest` to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@meixingdb: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
//dst.Spec.NodeTaints = []string{}
//dst.Spec.AvailabilityZones = []string{}

if restored.Spec.NodeLabels != nil {
@alexeldeib @CecileRobertMichon
Can I get some help on the conversions for Annotations/NodeLabels/NodeTaints/AvailabilityZones/KubeletConfig?
I've tried many combinations; none of them worked. I always got errors like this:
- Annotations: nil,
+ Annotations: map[string]string{},
OwnerReferences: nil,
Finalizers: nil,
... // 2 identical fields
},
Spec: v1beta1.AzureManagedMachinePoolSpec{
... // 10 identical fields
NodeTaints: {"Lȃ$ʩʧȧɰʉ透酠v"},
VnetSubnetID: nil,
- AvailabilityZones: []string{},
+ AvailabilityZones: nil,
ScaleSetPriority: &"{",
MaxPods: nil,
KubeletConfig: &v1beta1.KubeletConfig{
... // 4 identical fields
ImageGcLowThreshold: &-622828595,
TopologyManagerPolicy: &"lJÑĚʤp",
- AllowedUnsafeSysctls: &nil,
+ AllowedUnsafeSysctls: nil,
&v1beta1.AzureManagedMachinePool{
TypeMeta: {},
ObjectMeta: {Name: "鍥唕ʡ", GenerateName: "CȡN阗綬é", Namespace: "|Zt(Ƌ濦¾UQ僷轘v刢#", SelfLink: ">藓Bƕ", ...},
Spec: v1beta1.AzureManagedMachinePoolSpec{
... // 10 identical fields
NodeTaints: nil,
VnetSubnetID: &"疒¯蟸酈ƫ搳ćDƤ株茀]喋",
- AvailabilityZones: []string{},
+ AvailabilityZones: nil,
ScaleSetPriority: &"{X慿é渇霓渉ƶ´墜2",
MaxPods: &599523910,
KubeletConfig: &v1beta1.KubeletConfig{
... // 4 identical fields
ImageGcLowThreshold: nil,
TopologyManagerPolicy: nil,
- AllowedUnsafeSysctls: &nil,
+ AllowedUnsafeSysctls: nil,
FailSwapOn: nil,
ContainerLogMaxSizeMB: &145186574,
... // 2 identical fields
First you need to remove the new types from v1alpha4. Then conversions for both alpha3 <-> beta1 and alpha4 <-> beta1 need to be added. For new fields that were added, we want to restore the v1beta1 types if they exist (this saves information in the case of a round-trip conversion). If the field is a pointer, we only want to set it if it was previously set, for example:
if restored.Spec.KubeletConfig != nil {
dst.Spec.KubeletConfig = restored.Spec.KubeletConfig
}
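The nil-versus-empty mismatches in the fuzz output above follow the same rule: don't pre-initialize slices during conversion (the commented-out `[]string{}` assignments), and only assign restored slice fields when they are non-nil. A minimal sketch:

```go
// Preserve nil vs. empty for slice fields so the round-trip fuzz test
// sees identical values on both sides of the conversion.
if restored.Spec.AvailabilityZones != nil {
	dst.Spec.AvailabilityZones = restored.Spec.AvailabilityZones
}
if restored.Spec.NodeTaints != nil {
	dst.Spec.NodeTaints = restored.Spec.NodeTaints
}
```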
@@ -698,3 +698,29 @@ type AzureBastion struct {
func IsTerminalProvisioningState(state ProvisioningState) bool {
	return state == Failed || state == Succeeded
}

// KubeletConfig kubelet configurations of agent nodes.
type KubeletConfig struct {
this looks like it's a duplicate of the type in exp/? If not, where is it used?
Rather than define our own `KubeletConfig` type, can we simply add a new property to the `AgentPoolSpec`? Something like this:
$ git diff
diff --git a/azure/types.go b/azure/types.go
index 45bb67f1..e0422643 100644
--- a/azure/types.go
+++ b/azure/types.go
@@ -19,6 +19,7 @@ package azure
import (
"reflect"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-05-01/containerservice"
"github.com/google/go-cmp/cmp"
infrav1 "sigs.k8s.io/cluster-api-provider-azure/api/v1beta1"
@@ -441,4 +442,7 @@ type AgentPoolSpec struct {
// AvailabilityZones represents the Availability zones for nodes in the AgentPool.
AvailabilityZones []string
+
+ // KubeletConfig kubelet configurations of agent nodes.
+ KubeletConfig containerservice.KubeletConfig
}
Discussed this with @jackfrancis offline; we don't actually want to do this, because it would make us rely on the SDK not making breaking changes, it prevents us from using conversion for back-compat, and it also ties our API to a specific SDK version.
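A minimal sketch of the alternative that keeps the API self-contained; the converter name and import paths are hypothetical, and only a few fields are shown:

```go
import (
	"github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-05-01/containerservice"

	infrav1exp "sigs.k8s.io/cluster-api-provider-azure/exp/api/v1beta1"
)

// convertKubeletConfig maps the CRD-owned KubeletConfig onto the Azure SDK
// type at the service boundary, so the public API never leaks an SDK version.
func convertKubeletConfig(kc *infrav1exp.KubeletConfig) *containerservice.KubeletConfig {
	if kc == nil {
		return nil
	}
	return &containerservice.KubeletConfig{
		CPUManagerPolicy:  kc.CPUManagerPolicy,
		CPUCfsQuota:       kc.CPUCfsQuota,
		CPUCfsQuotaPeriod: kc.CPUCfsQuotaPeriod,
		FailSwapOn:        kc.FailSwapOn,
		PodMaxPids:        kc.PodMaxPids,
	}
}
```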
@@ -56,6 +56,116 @@ type AzureManagedMachinePoolSpec struct {
// ProviderIDList is the unique identifier as specified by the cloud provider.
// +optional
ProviderIDList []string `json:"providerIDList,omitempty"`
We should not be modifying v1alpha4 types; only v1beta1 types can be updated in the main branch.
@meixingdb what do you think about breaking up this big PR into small PRs, one for each feature that is getting added? This one is very big, which makes it difficult to merge quickly. There are also several fields in here that have already been added in the main branch with separate PRs, which is causing merge conflicts. Examples of smaller PRs to add AKS types: #1669, #1564, #1910
I agree with this; it makes more sense!
The comments in this thread have requested that the work in this PR be broken into multiple smaller feature additions. I believe the amount of feedback and the push for smaller feature additions will make it difficult to continue reviewing this PR. I'm going to close this PR, not due to lack of interest or appreciation for these additions, but for the aforementioned feedback. Thank you, @meixingdb. /close
@devigned: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issue: #1701
/kind feature
Describe the solution you'd like
Add optional configurations for users to specify specs for AKS node pools. The list of options includes (a usage sketch follows the list):
// AutoScaling config
type AutoScaling struct {
	// MaxCount - Maximum number of nodes for auto-scaling
	// +kubebuilder:validation:Required
	MaxCount *int32 `json:"maxCount"`
	// MinCount - Minimum number of nodes for auto-scaling
	// +kubebuilder:validation:Required
	MinCount *int32 `json:"minCount"`
}
// EnableNodePublicIP - Enable public IP for nodes
// +optional
EnableNodePublicIP
// EnableFIPS - Whether to use FIPS enabled OS
// +optional
EnableFIPS
// OsDiskType - OS disk type to be used for machines in a given agent pool. Allowed values are 'Ephemeral' and 'Managed'. If unspecified, defaults to 'Ephemeral' when the VM supports ephemeral OS and has a cache disk larger than the requested OSDiskSizeGB. Otherwise, defaults to 'Managed'. May not be changed after creation. Possible values include: 'Managed', 'Ephemeral'
// +optional
OsDiskType
// NodeLabels - Agent pool node labels to be persisted across all nodes in agent pool.
// +optional
NodeLabels
// NodeTaints - Taints added to new nodes during node pool create and scale. For example, key=value:NoSchedule.
// +optional
NodeTaints
// VnetSubnetID - VNet SubnetID specifies the VNet's subnet identifier for nodes and maybe pods
// +optional
VnetSubnetID
// AvailabilityZones - Availability zones for nodes. Must use VirtualMachineScaleSets AgentPoolType.
// +optional
AvailabilityZones
// ScaleSetPriority - ScaleSetPriority to be used to specify virtual machine scale set priority. Default to regular. Possible values include: 'Spot', 'Regular'
// +optional
ScaleSetPriority
// MaxPods - Maximum number of pods that can run on a node.
// +optional
MaxPods
// KubeletConfig - KubeletConfig specifies the configuration of kubelet on agent nodes.
// +optional
KubeletConfig
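As a usage illustration, a hypothetical spec exercising several of the proposed fields; the `to` pointer helpers are from go-autorest, and the field types are inferred from the diffs above, so they may not match the final API exactly:

```go
import "github.com/Azure/go-autorest/autorest/to"

// exampleSpec sketches a managed machine pool using the proposed options.
func exampleSpec() AzureManagedMachinePoolSpec {
	return AzureManagedMachinePoolSpec{
		AutoScaling: &AutoScaling{
			MinCount: to.Int32Ptr(1),
			MaxCount: to.Int32Ptr(5),
		},
		EnableNodePublicIP: to.BoolPtr(false),
		MaxPods:            to.Int32Ptr(110),
		NodeTaints:         []string{"sku=gpu:NoSchedule"},
		ScaleSetPriority:   to.StringPtr("Regular"),
	}
}
```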
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in `fixes #<issue_number>` format, will close the issue(s) when PR gets merged):
Fixes #1701
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
TODOs:
Release note:
Node pool options can be enabled
KubeletConfig can be optionally set