add support for topology spread constraint for VMs #1445

Merged: 1 commit merged into kubermatic:master on Sep 30, 2022

Conversation

sankalp-r (Contributor):

What this PR does / why we need it:
This PR adds support for TopologySpreadConstraint for VMs, which spreads user-cluster nodes across infra-cluster nodes, and removes the Pod affinity/anti-affinity mechanism.
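
To make the intent concrete, here is a minimal sketch of what a topology spread constraint attached to a KubeVirt VirtualMachineInstance spec could look like. It is not code from this PR: the helper name, the label selector, and the default values (maxSkew 1, kubernetes.io/hostname, ScheduleAnyway) are illustrative assumptions.

```go
package kubevirt

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

// vmiSpecWithSpread sketches how VMs belonging to one MachineDeployment could
// be spread across infra-cluster nodes via the VMI-level
// TopologySpreadConstraints field available in recent kubevirt.io/api versions.
func vmiSpecWithSpread(mdName string) kubevirtv1.VirtualMachineInstanceSpec {
	return kubevirtv1.VirtualMachineInstanceSpec{
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,                        // tolerate at most 1 VM of imbalance
			TopologyKey:       "kubernetes.io/hostname", // one topology domain per infra node
			WhenUnsatisfiable: corev1.ScheduleAnyway,    // soft constraint
			LabelSelector: &metav1.LabelSelector{
				// hypothetical label identifying VMs of this MachineDeployment
				MatchLabels: map[string]string{"md": mdName},
			},
		}},
	}
}
```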

Which issue(s) this PR fixes:

Fixes kubermatic/kubermatic#10646

What type of PR is this?

/kind feature

Special notes for your reviewer:

Does this PR introduce a user-facing change? Then add your Release Note here:

Add support for topology spread constraint for KubeVirt VirtualMachines.

Documentation:

TBD

Signed-off-by: Sankalp Rangare [email protected]

@sankalp-r added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Sep 26, 2022
@kubermatic-bot added the kind/feature, release-note, dco-signoff: yes, sig/cluster-management, sig/osm, and size/L labels on Sep 26, 2022
@sankalp-r requested a review from mfranczy on Sep 26, 2022 at 13:37
@sankalp-r added the sig/virtualization label on Sep 26, 2022
@sankalp-r (Contributor, Author):

/retest

go.mod Outdated
@@ -57,7 +57,7 @@ require (
 	k8s.io/klog v1.0.0
 	k8s.io/kubelet v0.24.2
 	k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed
-	kubevirt.io/api v0.54.0
+	kubevirt.io/api v0.57.0

Contributor:

could you import v0.57.1?

@@ -87,6 +90,13 @@ type NodeAffinityPreset struct {
 	Values []providerconfigtypes.ConfigVarString `json:"values,omitempty"`
 }

+// TopologySpreadConstraint.
+type TopologySpreadConstraint struct {

Contributor:

Please add comments to fields. I know we didn't do that in the past but it's time to change this.
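
For instance, the commented fields might look like the sketch below. This assumes the same providerconfigtypes import as the surrounding diff, and the field set simply mirrors the upstream topologySpreadConstraints API; the PR's final fields and wording may differ.

```go
// TopologySpreadConstraint describes how VMs of a MachineDeployment are
// spread across infra-cluster topology domains.
type TopologySpreadConstraint struct {
	// MaxSkew describes the degree to which VMs may be unevenly distributed.
	MaxSkew providerconfigtypes.ConfigVarString `json:"maxSkew,omitempty"`
	// TopologyKey is the key of infra-node labels that defines a topology domain.
	TopologyKey providerconfigtypes.ConfigVarString `json:"topologyKey,omitempty"`
	// WhenUnsatisfiable indicates how to deal with a VM that does not satisfy
	// the constraint, e.g. DoNotSchedule or ScheduleAnyway.
	WhenUnsatisfiable providerconfigtypes.ConfigVarString `json:"whenUnsatisfiable,omitempty"`
}
```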

@@ -75,7 +76,9 @@ type Disk struct {

 // Affinity.
 type Affinity struct {
-	PodAffinityPreset providerconfigtypes.ConfigVarString `json:"podAffinityPreset,omitempty"`
+	// Deprecated
+	PodAffinityPreset providerconfigtypes.ConfigVarString `json:"podAffinityPreset,omitempty"`

Contributor:

Please add an explanation of why it's deprecated now. It should be:
// Deprecated: <description>

We deprecate that field because KubeVirt now supports topology spread constraints.
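
Applied to the field in question, the convention could read like this (suggested wording only, not necessarily the PR's final comment):

```go
// Deprecated: PodAffinityPreset is no longer evaluated; spreading of VMs
// across infra nodes is handled by topology spread constraints instead.
PodAffinityPreset providerconfigtypes.ConfigVarString `json:"podAffinityPreset,omitempty"`
```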

@sankalp-r (Contributor, Author):

/retest

@mfranczy (Contributor):

@sankalp-r do you consider this PR ready or are you still working on it?

@sankalp-r (Contributor, Author):

> @sankalp-r do you consider this PR ready or are you still working on it?

PR is ready from my perspective.

@sankalp-r changed the title from "WIP: add support for topology spread constraint for VMs" to "add support for topology spread constraint for VMs" on Sep 27, 2022
@kubermatic-bot removed the do-not-merge/work-in-progress label on Sep 27, 2022
@sankalp-r (Contributor, Author):

/retest

@mfranczy (Contributor):

@sankalp-r I will do another review round tomorrow morning.

@mfranczy (Contributor) left a comment:

/approve
/lgtm

@kubermatic-bot added the lgtm label on Sep 28, 2022
@kubermatic-bot (Contributor):

LGTM label has been added.

Git tree hash: af793510b385c1fde474dd8d4aca06c73c9c1eb3

@mfranczy (Contributor) commented on Sep 28, 2022:

@sankalp-r as I understand it, if somebody uses PodAffinity prior to this change, a later recreation of the VM will apply topology spread constraints instead, right? Please don't forget to test that this migration is smooth.

@kubermatic-bot removed the lgtm label on Sep 29, 2022
@sankalp-r (Contributor, Author) commented on Sep 29, 2022:

> @sankalp-r as I understand it, if somebody uses PodAffinity prior to this change, a later recreation of the VM will apply topology spread constraints instead, right? Please don't forget to test that this migration is smooth.

@mfranczy Before this change, if we had a MachineDeployment with PodAffinity, the corresponding VMs would also have it. After this change, the existing MachineDeployment and its VMs stay the same, meaning the PodAffinity spec remains on the existing VMs. But if we now scale the old MachineDeployment to more replicas, new VMs will be created without PodAffinity (because it is ignored); instead they get the default topology spread constraints. The old MachineDeployment object will still carry the Affinity spec until we remove it.
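
A minimal sketch of that defaulting behaviour follows. The function name, label key, and default values are assumptions for illustration, not the PR's exact code:

```go
package kubevirt

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

// applyTopologySpread sets constraints on a VMI spec during reconciliation:
// user-configured constraints win, otherwise a default is applied. The
// deprecated pod affinity/anti-affinity presets are never read here, which is
// why newly created VMs of an old MachineDeployment no longer carry them.
func applyTopologySpread(spec *kubevirtv1.VirtualMachineInstanceSpec, configured []corev1.TopologySpreadConstraint, mdName string) {
	if len(configured) > 0 {
		spec.TopologySpreadConstraints = configured
		return
	}
	spec.TopologySpreadConstraints = []corev1.TopologySpreadConstraint{{
		MaxSkew:           1,                        // assumed default
		TopologyKey:       "kubernetes.io/hostname", // assumed topology key
		WhenUnsatisfiable: corev1.ScheduleAnyway,
		LabelSelector: &metav1.LabelSelector{
			// hypothetical label identifying VMs of this MachineDeployment
			MatchLabels: map[string]string{"md": mdName},
		},
	}}
}
```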

@mfranczy (Contributor):

/approve
/lgtm

@kubermatic-bot added the lgtm label on Sep 29, 2022
@kubermatic-bot (Contributor):

LGTM label has been added.

Git tree hash: aa6fb7661701a05ba146403ffc32d3e03f1bb2bd

@sankalp-r (Contributor, Author):

/retest

@ahmedwaleedmalik (Member) left a comment:

/approve

@kubermatic-bot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahmedwaleedmalik, mfranczy, sankalp-r

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubermatic-bot added the approved label on Sep 30, 2022
@kubermatic-bot merged commit 5819bb5 into kubermatic:master on Sep 30, 2022
mate4st pushed a commit to mate4st/machine-controller that referenced this pull request Mar 13, 2023
Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
dco-signoff: yes - Denotes that all commits in the pull request have the valid DCO signoff message.
kind/feature - Categorizes issue or PR as related to a new feature.
lgtm - Indicates that a PR is ready to be merged.
release-note - Denotes a PR that will be considered when it comes time to generate release notes.
sig/cluster-management - Denotes a PR or issue as being assigned to SIG Cluster Management.
sig/osm - Denotes a PR or issue as being assigned to SIG OSM.
sig/virtualization - Denotes a PR or issue as being assigned to SIG Virtualization.
size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.

Successfully merging this pull request may close these issues: Support topology spread constraints

4 participants