⚠️ Add strict validation for CIDR ranges specified in Clusters #7420
Conversation
@marvinWolff - it would be great to get your input on this.
flake? /retest
@jayunit100 PTAL if you have some time. We are discussing validation rules for CIDR in ClusterNetwork (ServiceCIDRs and Node/PodsCIDR).
I've updated the implementation to match what kubeadm does; the validation is done in the webhook using the Cluster's ClusterNetwork fields.

This change could be breaking for some providers if they were using these values for something other than passing them directly to Kubernetes, e.g. informing an IP pool for some IPAM CNI, so I think we might need to be careful with including this change. That may be the case in #7358.

The base problem is that validation on this field is done differently depending on whether or not users are using ClusterClass. If it's better to leave the webhook validation as is, then validation should no longer be done when computing variables on the ClusterClass, to make it consistent.
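For readers following along, here is a minimal sketch of where the pod and service CIDR blocks live on a Cluster object, using the v1beta1 API types; the object name and CIDR values below are illustrative, not taken from this PR:

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// exampleCluster shows the ClusterNetwork fields the webhook validation
// applies to; the CIDR values are placeholders, not defaults.
var exampleCluster = &clusterv1.Cluster{
	ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
	Spec: clusterv1.ClusterSpec{
		ClusterNetwork: &clusterv1.ClusterNetwork{
			Pods:     &clusterv1.NetworkRanges{CIDRBlocks: []string{"192.168.0.0/16"}},
			Services: &clusterv1.NetworkRanges{CIDRBlocks: []string{"10.96.0.0/12"}},
		},
	},
}
```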
/hold
For more input from providers on the blast radius of this change.
Minor comments, rest lgtm
We discussed this during the Cluster API office hours meeting on Oct 19th and decided by consensus that we should upgrade this PR to a ⚠️ (breaking change). @killianmuldoon will send an email to the mailing list.
Hello @killianmuldoon, sorry for the late reply. This looks good to me. Validating what Kubernetes expects is a good idea to prevent creating clusters with a basically invalid configuration like we did. We also solved our issue in the meantime by specifying the whole IP range and using Calico IP pools.
Thanks @marvinWolff, can you share which providers you were using with the original config? Just wondering how the original version you had was working initially.
I did some digging into how Kubernetes validates these IP ranges (and which components use which) across the core components. The conclusion is that these ranges are assumed to be either a single entry or dual-stack if there are two entries. Supplying more than two CIDR ranges is not supported. With this in mind I think the validation approach in this PR is correct.

Details:
- Kube Proxy: ClusterCIDR from flag
- Kube APIServer: ServiceClusterIPRanges from flag
- Kube Controller Manager: ClusterCIDR from flag; ServiceCIDR from flag
- Kubelet: PodCIDR from flag
- Kube Scheduler: no CIDR-related flags
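To make the rules above concrete, here is a rough sketch of what that validation amounts to: at most two blocks, every block must parse as a CIDR, and two blocks are only accepted as an IPv4/IPv6 dual-stack pair. This is not the code added in the PR; the function name and error wording are assumptions.

```go
package example

import (
	"fmt"
	"net"
)

// validateCIDRBlocksSketch is an illustrative stand-in for the validation
// discussed above; it is not the webhook code from this PR.
func validateCIDRBlocksSketch(cidrs []string) error {
	if len(cidrs) > 2 {
		return fmt.Errorf("at most two CIDR blocks are allowed, got %d", len(cidrs))
	}
	var hasIPv4, hasIPv6 bool
	for _, c := range cidrs {
		ip, _, err := net.ParseCIDR(c)
		if err != nil {
			return fmt.Errorf("%q is not a valid CIDR block: %v", c, err)
		}
		if ip.To4() != nil {
			hasIPv4 = true
		} else {
			hasIPv6 = true
		}
	}
	// Two blocks are only meaningful as a dual-stack pair: one IPv4 and one IPv6.
	if len(cidrs) == 2 && !(hasIPv4 && hasIPv6) {
		return fmt.Errorf("when two CIDR blocks are specified, one must be IPv4 and one IPv6")
	}
	return nil
}
```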
/hold
Want to bring this up one more time at office hours today - we can merge after that.
As discussed in the community meeting on 9 November, we'd like to get more comments on this before merging. Please review the defaults as described in #7521 (comment); these will be encoded by this PR. If there are no additional comments or objections by 11 November, we will merge this.
@killianmuldoon Can you rebase when you have some time?
Thank you very much!!
/lgtm
/assign @fabriziopandini
I like the fact that we are not making IpFamily a first-class concern given that it is CAPD specific.
I have one last question on validation to make sure we are not blocking use cases allowed upstream (like the one in the attached issue).
internal/webhooks/cluster.go (Outdated)

@@ -443,3 +470,40 @@ func machineDeploymentClassOfName(clusterClass *clusterv1.ClusterClass, name str
	}
	return nil
}

// validateCIDRBlocks validates that:
// 1) No more than two CIDR blocks are specified.
Is this a Kubernetes limitation that applies both to the service and pod CIDRs?
Same for 2).
If not, or if we are not sure, I will simply validate that all the CIDRs are valid, without making assumptions on the combination of them in this first iteration.
Both
(see #7420 (comment))
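For illustration only, here is a hedged sketch of how a check like validateCIDRBlocksSketch (from the earlier sketch) could be applied to both the pods and services CIDR blocks in the Cluster webhook. The function name and field paths are assumptions, not the actual diff in this PR.

```go
package example

import (
	"k8s.io/apimachinery/pkg/util/validation/field"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// validateClusterNetworkSketch applies the same CIDR checks (see
// validateCIDRBlocksSketch above) to both the pods and services CIDR
// block lists and collects the results as field errors.
func validateClusterNetworkSketch(network *clusterv1.ClusterNetwork) field.ErrorList {
	var allErrs field.ErrorList
	if network == nil {
		return allErrs
	}
	if network.Pods != nil {
		if err := validateCIDRBlocksSketch(network.Pods.CIDRBlocks); err != nil {
			allErrs = append(allErrs, field.Invalid(
				field.NewPath("spec", "clusterNetwork", "pods", "cidrBlocks"),
				network.Pods.CIDRBlocks, err.Error()))
		}
	}
	if network.Services != nil {
		if err := validateCIDRBlocksSketch(network.Services.CIDRBlocks); err != nil {
			allErrs = append(allErrs, field.Invalid(
				field.NewPath("spec", "clusterNetwork", "services", "cidrBlocks"),
				network.Services.CIDRBlocks, err.Error()))
		}
	}
	return allErrs
}
```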
/remove-hold
/lgtm
one nit, otherwise lgtm
Signed-off-by: killianmuldoon <[email protected]>
Thx!
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sbueringer

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Signed-off-by: killianmuldoon [email protected]
Add a check in the Cluster webhook to ensure each CIDR field only contains valid CIDR blocks.
/kind bug
Fixes: #7358