add support for topology spread constraint for VMs #1445
Conversation
/retest
go.mod (Outdated)
@@ -57,7 +57,7 @@ require (
 	k8s.io/klog v1.0.0
 	k8s.io/kubelet v0.24.2
 	k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed
-	kubevirt.io/api v0.54.0
+	kubevirt.io/api v0.57.0
could you import v0.57.1?
@@ -87,6 +90,13 @@ type NodeAffinityPreset struct {
 	Values []providerconfigtypes.ConfigVarString `json:"values,omitempty"`
 }

+// TopologySpreadConstraint.
+type TopologySpreadConstraint struct {
Please add comments to the fields. I know we didn't do that in the past, but it's time to change this.
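For reference, a fully commented version of the new type could look like the sketch below. The field names mirror the upstream Kubernetes topology spread constraint API; treat the exact names and comment wording as an assumption rather than the merged code.

// TopologySpreadConstraint describes a topology spread constraint for VMs.
// NOTE: field names are assumed to mirror corev1.TopologySpreadConstraint.
type TopologySpreadConstraint struct {
	// MaxSkew describes the degree to which VMs may be unevenly distributed.
	MaxSkew providerconfigtypes.ConfigVarString `json:"maxSkew,omitempty"`
	// TopologyKey is the key of infra-cluster node labels to spread across.
	TopologyKey providerconfigtypes.ConfigVarString `json:"topologyKey,omitempty"`
	// WhenUnsatisfiable indicates how to deal with a VM that doesn't satisfy
	// the spread constraint (ScheduleAnyway or DoNotSchedule).
	WhenUnsatisfiable providerconfigtypes.ConfigVarString `json:"whenUnsatisfiable,omitempty"`
}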
@@ -75,7 +76,9 @@ type Disk struct {

 // Affinity.
 type Affinity struct {
-	PodAffinityPreset providerconfigtypes.ConfigVarString `json:"podAffinityPreset,omitempty"`
+	// Deprecated
+	PodAffinityPreset providerconfigtypes.ConfigVarString `json:"podAffinityPreset,omitempty"`
please add an explanation of why it's deprecated now. It should be:
// Deprecated: <description>
We deprecate that field because KubeVirt now supports topology spread constraints.
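For illustration, the requested format might read like this; the wording is a sketch, not the merged comment:

// Affinity.
type Affinity struct {
	// Deprecated: PodAffinityPreset is deprecated because KubeVirt now
	// supports topology spread constraints; use TopologySpreadConstraint instead.
	PodAffinityPreset providerconfigtypes.ConfigVarString `json:"podAffinityPreset,omitempty"`
	// ...remaining fields unchanged...
}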
/retest
@sankalp-r do you consider this PR ready, or are you still working on it?
PR is ready from my perspective.
/retest
@sankalp-r tomorrow morning I will do another review round.
/approve
/lgtm
LGTM label has been added. Git tree hash: af793510b385c1fde474dd8d4aca06c73c9c1eb3
@sankalp-r as I understand it, prior to this change, if somebody uses PodAffinity, then later when the VM is recreated it will have topology spread constraints applied instead, right? Please don't forget to test that this migration is smooth.
@mfranczy Before this change, if we have a MachineDeployment with PodAffinity, the corresponding VMs will also have it.
Signed-off-by: Sankalp Rangare <[email protected]>
/approve
LGTM label has been added. Git tree hash: aa6fb7661701a05ba146403ffc32d3e03f1bb2bd
/retest
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahmedwaleedmalik, mfranczy, sankalp-r. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: Sankalp Rangare <[email protected]>
What this PR does / why we need it:
This PR adds support for TopologySpreadConstraint for VMs, which will spread user-cluster nodes across infra-cluster nodes, and removes the Pod affinity/anti-affinity mechanism.

Which issue(s) this PR fixes:
Fixes kubermatic/kubermatic#10646
What type of PR is this?
/kind feature
Special notes for your reviewer:
Does this PR introduce a user-facing change? Then add your Release Note here:
Documentation:
Signed-off-by: Sankalp Rangare <[email protected]>
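For context, here is a minimal sketch of how such a constraint maps onto a KubeVirt VM spec. It assumes kubevirt.io/api v0.57+ (the reason for the dependency bump in this PR) exposes TopologySpreadConstraints on the VMI spec; the helper name, label key, and values are illustrative, not taken from this PR.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

// vmiSpecWithSpread is a hypothetical helper: it returns a VMI spec that
// spreads the VMs of one MachineDeployment across infra-cluster hosts.
func vmiSpecWithSpread(mdName string) kubevirtv1.VirtualMachineInstanceSpec {
	return kubevirtv1.VirtualMachineInstanceSpec{
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{
			{
				// Allow at most a difference of one VM between any two hosts.
				MaxSkew: 1,
				// Spread across infra-cluster nodes (hosts).
				TopologyKey: "kubernetes.io/hostname",
				// Prefer spreading, but still schedule when it's impossible.
				WhenUnsatisfiable: corev1.ScheduleAnyway,
				// Group the VMs of one MachineDeployment; "md" is an
				// illustrative label key.
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"md": mdName},
				},
			},
		},
	}
}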