PRR for 2593-multiple-cluster-cidrs #3839
Conversation
Hi @sarveshr7. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc @aojea
- multicidrset_cidrs_allocations_total - Count of the total number of CIDR allocations
- multicidrset_cidrs_releases_total - Count of the total number of CIDR releases
- multicidrset_usage_cidrs - Gauge measuring the percentage of the provided CIDRs that have been allocated
The health of the service can be determined by checking that the *_total metrics increase with the number of nodes and that the usage_cidrs metric is greater than or equal to the number of nodes.
It is important to mention that the service is optional; it depends on the CNI implemented in the cluster. You can have the multicidr allocator be a no-op in the cluster if the CNI doesn't make use of the information provided.
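The rule of thumb above can be expressed as a tiny check. This is a hypothetical sketch only; the function name and the way the metric values are obtained are my assumptions, not part of the controller:

```python
def cidr_allocator_looks_healthy(allocations_total, releases_total,
                                 usage_cidrs, node_count):
    """Hypothetical health heuristic based on the metrics discussed above.

    allocations_total / releases_total correspond to the multicidrset_*
    counters, usage_cidrs to the gauge, node_count to the current number
    of nodes in the cluster.
    """
    outstanding = allocations_total - releases_total  # CIDRs still in use
    # Healthy when every node holds a CIDR and the usage gauge keeps pace.
    return outstanding >= node_count and usage_cidrs >= node_count

# Example: 10 allocations, 2 releases, gauge at 8, 8 nodes -> healthy
print(cidr_allocator_looks_healthy(10, 2, 8, 8))
```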
Done.
I added that it is optional and based on --allocate-node-cidrs being set to true in kube-controller-manager.
I am not sure about the CNI-specific behavior. I believe when --allocate-node-cidrs is set to true, this controller will still add the PodCIDRs to the Node object. However, the CNI can ignore them if it manages the IPAM itself?
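For context, a minimal kube-controller-manager configuration enabling this allocator might look like the following. The flag values are illustrative: --allocate-node-cidrs, --cidr-allocator-type, and --feature-gates are existing kube-controller-manager flags, and MultiCIDRRangeAllocator is the allocator type and feature gate introduced by KEP-2593.

```shell
# Illustrative flags only; adapt the CIDR and other settings to your cluster.
kube-controller-manager \
  --allocate-node-cidrs=true \
  --cidr-allocator-type=MultiCIDRRangeAllocator \
  --feature-gates=MultiCIDRRangeAllocator=true \
  --cluster-cidr=10.0.0.0/16
```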
Yes, the field is optional to consume; it doesn't imply the cluster will use it.
/ok-to-test
Great feature! I plan a gist to show how this can be used to solve kubernetes/kubernetes#109814. This feature can be useful for secondary networks when they become supported by K8s. I would like some pro-active preparation for multi-network already now. Perhaps something as simple as a label:

```yaml
apiVersion: networking.k8s.io/v1alpha1
kind: ClusterCIDR
metadata:
  name: default
  labels:
    ipam.kubernetes.io/range-controller: my-net
spec:
  perNodeHostBits: 8
  ipv4: "11.0.0.0/16"
  ipv6: "1100::/112"
```

telling the Node IPAMController to ignore this ClusterCIDR. But I think the work you have already done will be repeated, so a perhaps better way is to allow the current IPAMController to create the ranges in some not-yet-existing network object instead of the node object (it will not be called "network" though).
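The perNodeHostBits field in manifests like the one above determines how large each node's range is: with 8 host bits, a /16 IPv4 range is carved into /24 per-node subnets. A small sketch of that arithmetic (the helper name is mine, not part of the API):

```python
import ipaddress

def per_node_allocation(cidr: str, per_node_host_bits: int):
    """Return (per-node prefix length, number of node subnets) for a
    ClusterCIDR range, mirroring the perNodeHostBits semantics."""
    net = ipaddress.ip_network(cidr)
    node_prefix = net.max_prefixlen - per_node_host_bits
    return node_prefix, 2 ** (node_prefix - net.prefixlen)

print(per_node_allocation("11.0.0.0/16", 8))  # /24 per node, 256 node subnets
print(per_node_allocation("1100::/112", 8))   # /120 per node, 256 node subnets
```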
The KEP should contain a working yaml manifest. Below is the one I am currently using. It took a while to figure out, so to save time for others please include something similar:

```yaml
apiVersion: networking.k8s.io/v1alpha1
kind: ClusterCIDR
metadata:
  name: ipv4
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - {key: wantIPv4, operator: In, values: ["true"]}
  perNodeHostBits: 7
  ipv4: "172.20.0.0/24"
  ipv6: "fd00:1000::/120"
---
apiVersion: networking.k8s.io/v1alpha1
kind: ClusterCIDR
metadata:
  name: default
spec:
  perNodeHostBits: 16
  ipv6: "fc00:2000::/96"
```
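To exercise the nodeSelector in a manifest like the first one above, the matching label has to be present on the node. A hypothetical usage example (the node name worker-1 is an assumption):

```shell
# Label a node so it matches the "ipv4" ClusterCIDR's nodeSelector.
kubectl label node worker-1 wantIPv4=true
# Nodes without the label fall through to the "default" ClusterCIDR.
```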
Couple more comments from me, but overall it looks reasonable.
@uablrek Thanks for the suggestion. I think @mskrocki is already considering it in the multi network design |
This is good to go to Beta.
/approve PRR
/lgtm
/assign @thockin
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: aojea, sarveshr7, thockin, wojtek-t. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Adds a PRR for KEP-2593 to graduate to beta