
design-proposal: VirtualMachineInstanceMigration - Live migration to a named node #320

Open — wants to merge 4 commits into base: main
Conversation

@tiraboschi (Member) commented Sep 3, 2024

What this PR does / why we need it:
Adding a design proposal to extend the VirtualMachineInstanceMigration
object with an additional API to let a cluster admin
try to trigger a live migration of a VM, injecting
on the fly an additional NodeSelector constraint.
The additional NodeSelector can only restrict the set
of Nodes that are valid targets for the migration
(eventually down to a single host).
All the affinity rules defined on the VM spec are still
going to be satisfied.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes https://issues.redhat.com/browse/CNV-7075

Special notes for your reviewer:
Something like this was already directly proposed/implemented in kubevirt/kubevirt#10712 and got discussed there.

Checklist

This checklist is not enforced, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.

Release note:

design-proposal: VirtualMachineInstanceMigration - Live migration to a named node

@kubevirt-bot kubevirt-bot added dco-signoff: yes Indicates the PR's author has DCO signed all their commits. size/M labels Sep 3, 2024
tiraboschi pushed a commit to tiraboschi/kubevirt that referenced this pull request Sep 3, 2024
Follow-up and derived from:
kubevirt#10712
Implements:
kubevirt/community#320

TODO: add functional tests

Signed-off-by: zhonglin6666 <[email protected]>
Signed-off-by: Simone Tiraboschi <[email protected]>
@dankenigsberg (Member) left a comment

Lovely to see this clear design proposal (even if I don't like anything that assumes a specific node is long-living). I have two questions, though.


## Goals
- A user allowed to trigger a live-migration of a VM and list the nodes in the cluster is able to rely on a simple and direct API to try to live migrate a VM to a specific node.
- The explicit migration target overrules a nodeSelector or affinity and anti-affinity rules defined by the VM owner.
Member:
I find this odd, as the VM and the application in it may not function well (or at all) if affinity is ignored. Can you share more about the origins of this goal? I'd expect the target node to be ANDed with existing anti/affinity rules.

Member Author:

I tend to think that, for a cluster admin who is trying to force a VM to migrate to a named node, this is the natural and expected behaviour:
if I explicitly select a named node, I expect that my VM will eventually be migrated there and nowhere else (such as to a different node selected by the scheduler according to a weighted combination of affinity criteria, resource availability and so on). I can tolerate that the live migration fails because I chose a wrong node, but the controller should only try to live-migrate it according to what I'm explicitly asking for.
And by the way, this is absolutely consistent with the native k8s behaviour for pods.
spec.nodeName for pods is under spec for historical reasons, but it's basically controlled by the scheduler:
when a pod is going to be executed, the scheduler checks it and, according to available cluster resources, node selectors, weighted affinity and anti-affinity rules and so on, selects a node and writes it into spec.nodeName on the pod object. At this point the kubelet on the named node will try to execute the Pod on that node.
If the user explicitly sets spec.nodeName on a pod (or in the template of a deployment and so on), the scheduler is not involved in the process, since the pod is basically already scheduled for that node and nothing else, and so the kubelet on that node will directly try to execute it there, eventually failing.
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename explicitly states:

If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node.
Using nodeName overrules using nodeSelector or affinity and anti-affinity rules.

And this in my opinion is exactly how we should treat a Live migration attempt to a named node.
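
For reference, a minimal sketch of what bypassing the scheduler looks like on a plain Pod (the pod and node names are assumed purely for illustration):

# Minimal sketch: a Pod with spec.nodeName pre-set.
# The scheduler ignores this Pod; the kubelet on the named node tries to run it
# directly, possibly failing if the node cannot actually host it.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod        # assumed name
spec:
  nodeName: node01        # assumed node name
  containers:
  - name: app
    image: nginx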

Member:

Let's take the following example (this is a real world use-case):

  1. An admin is adding a new node to a cluster to take it into prod. This node has a taint to prevent workloads from immediately landing there.
  2. The admin wants to migrate a VM to this node now to validate that it is working properly.

If we AND a new selector for this node, then the migration will not take place, because there is the taint. We'd also need to add a toleration to get the VM scheduled to that node.

With spec.nodeName it would be no issue - initially - it could become one if Require*atRuntime effects are used.
However, with spec.nodeName all other validations - CPU caps, extended, storage, and local resources etc. - will be ignored. We are asking a VM not to start.
Worse: it would be really hard now to understand WHY the VM is not launching.

Thus I think we have to AND to the node selector, but we need code to understand taints specifically (because taints keep workloads away).
Then we still need to think about a generic mechanism to deal with the reasons why a pod cannot be placed on the selected node.
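
To make the example concrete, here is a sketch of the taint described above and the toleration the target pod would also need if we only AND an extra selector; the node name and taint key are assumptions for illustration:

# The freshly added node is tainted to keep workloads away (assumed taint key).
apiVersion: v1
kind: Node
metadata:
  name: new-node-01
spec:
  taints:
  - key: example.com/onboarding
    value: "true"
    effect: NoSchedule
---
# Pod-spec fragment: without a toleration like this on the target virt-launcher
# pod, an ANDed node selector alone cannot place the migration target on that node.
tolerations:
- key: example.com/onboarding
  operator: Equal
  value: "true"
  effect: NoSchedule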

Member:

I do not like taking examples from the historically-understandable Pod.spec.nodeName. Node identity is not something that should have typically been exposed to workload owners.

Can you summarize your reasoning into the proposal? I think I understand it now, but I am not at ease with it. For example, a cluster admin may easily violate anti/affinity rules that are important for app availability.

Member Author:

@fabiand with taints it is a bit more complex: the valid effects for a taint are NoExecute, NoSchedule and PreferNoSchedule.
Bypassing the scheduler by directly setting spec.nodeName will allow us to bypass taints with the NoSchedule and PreferNoSchedule effects but, AFAIK, it will still be blocked by a NoExecute taint, which is also enforced by the kubelet with eviction.

Member Author:

@dankenigsberg yes, this is a critical aspect of this design proposal, so we should carefully explore and weigh the different alternatives, tracking them down in the design proposal itself as a future reference.

In my opinion the choice strictly depends on the use case and the power we want to offer to the cluster admin when creating a live migration request to a named node.

Directly setting spec.nodeName on the target pod will completely bypass all the scheduling hints (spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution) and constraints (spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution), meaning that the target pod will be started on the named node regardless of how the VM is actually configured.

Another option is trying to append/merge (this sub-topic deserves a discussion of its own) something like

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
            - key: metadata.name
              operator: In
              values:
              - <nodeName>

to the affinity rules already defined on the VM.
My concern with this choice is that the affinity/anti-affinity grammar is pretty complex, so, if the VM owner already defined some affinity/anti-affinity rules, we can easily end up with a set of conflicting rules such that the target pod cannot be scheduled on the named node nor on any other node.
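
As a sketch of how such a conflict could look (label values and node names are assumptions): if the VM owner already requires zone-a and the admin names a node in zone-b, the merged term matches no node and the target pod stays Pending.

# Merged requirement on the target virt-launcher pod (sketch):
# the owner's zone requirement and the injected hostname requirement are ANDed
# inside the same term, so no node can satisfy both.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - zone-a
        matchFields:
        - key: metadata.name
          operator: In
          values:
          - node-in-zone-b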

If the use case that we want to address is giving the cluster admin the right to try migrating a generic VM to a named node (for instance for maintenance/emergency reasons), this approach does not fully address it, with many possible cases where the only viable option is still manually overriding the affinity/anti-affinity rules set by the VM owner.

I still tend to think that always bypassing the scheduler with spec.nodeName is the K.I.S.S. approach here, if forcing a live migration to a named node is exactly what the cluster admin is trying to do.

Member Author:

I summarized these considerations in the proposal itself, let's continue from there.


# Implementation Phases
A really close attempt was already tried in the past with https://github.com/kubevirt/kubevirt/pull/10712 but the PR got some pushback.
A similar PR should be reopened and refined, and we should implement functional tests.
Member:

Would you outline the nature of the pushback? Do we currently have good answers to the issues raised back then?

Member Author:

Trying to summarize (@EdDev please keep me honest on this): it was somehow considered a semi-imperative approach, and it was pointed out that a similar behavior could already be achieved indirectly by modifying on the fly, and then reverting, the affinity rules on the VM object.
see: kubevirt/kubevirt#10712 (comment)
and: kubevirt/kubevirt#10712 (comment)

How much this is imperative is questionable: in the end we already have a VirtualMachineInstanceMigration object that you can use to declare that you want to trigger a live migration; this is only about letting you also declare that you want that live migration to go to a named host.

The alternative approach, based on amending the affinity rules on the VM object and waiting for the LiveUpdate rollout strategy to propagate them to the VMI before trying a live migration, is described (with its main drawback) in the Alternative design section of this proposal.

Member:

Can you inline this succinctly? E.g., that PR got some pushback because it was not clear why a new API for a one-off migration is needed. We give here a better explanation of why this one-off migration destination request is necessary.

Member Author:

done

Member:

  • The "one-time" operation convinced me.
  • The reasoning for the real need is hard for me, but I did give feedback on this proposal about what is convincing me.

@iholder101 (Contributor):

/cc

- Cluster-admin: the administrator of the cluster

## User Stories
- As a cluster admin I want to be able to try to live-migrate a VM to a specific node for maintenance reasons, eventually overriding what the VM owner set
Member:

I'd like to see more fleshed out user stories. It's unclear to me based on these user stories why the existing methods wouldn't suffice.

As a cluster admin I want to be able to try to live-migrate a VM to a specific node for maintenance reasons, eventually overriding what the VM owner set

For example, why wouldn't the cluster admin taint the source node and live migrate the VMs away using the existing methods? Why would the admin need direct control over the exact node the VM goes to? I'd like to see a solid answer for why this is necessary over existing methods.

That's where this discussion usually falls apart and why it hasn't seen progress through the years. I'm not opposed to this feature, but I do think we need to articulate clearly why the feature is necessary.

Member Author:

I expanded this section

Comment on lines 38 to 67
## User Stories
- As a cluster admin I want to be able to try to live-migrate a VM to a specific node for various possible reasons such as:
- I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
- I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
- Foreseeing a peak in application load (e.g. new product announcement), I'd like to balance in advance my cluster according to my expectation and not to current observations
- During a planned maintenance window, I'm planning to drain more than one node in a sequence, so I want to be sure that the VM is going to land on a node that is not going to be drained in a near future (needing then a second migration) and being not interested in cordoning it also for other pods
- I just added a new node and I want to validate it trying to live migrate a specific VM there
Member:

nice! these are good reasons that hadn't been explored during previous discussions, thanks

When a pod is going to be executed, the scheduler is going to check it and, according to available cluster resources, node selectors, weighted affinity and anti-affinity rules and so on,
the scheduler is going to select a node and write its name on `spec.nodeName` on the pod object. At this point the kubelet on the named node will try to execute the Pod on that node.

If `spec.nodeName` is already set on a pod object as in this approach, the scheduler is not going to be involved in the process since the pod is basically already scheduled for that node and only for that named node, and so the kubelet on that node will directly try to execute it there, eventually failing.
Member:

I think using pod.spec.nodeName is likely the most straightforward approach. This does introduce some new failure modes that might not be obvious to admins.

For example, today if a target pod is unschedulable due to lack of resources, the migration object will time out due to the pod being stuck in "pending". This information is fed back to the admin as a k8s event associated with the migration object.

However, by setting pod.spec.nodeName directly, we'd be bypassing the checks that ensure the required resources are available on the node (like the node having the "kvm" device available, for instance), and the pod would likely get scheduled and immediately fail. I don't think we are currently bubbling up these types of errors to the migration object, so this could leave admins wondering why their migration failed.

I guess what I'm trying to get at here is, I like this approach, let's make sure the new failure modes get reported back on the migration object so the Admin has some sort of clue as to why a migration has failed.

Member:

@davidvossel We already report the failure reason on the VMIM. This is part of the VMIM status.

pod.spec.nodeName entirely bypasses the scheduler, making AAQ unusable as it relies on "pod scheduling readiness".

From my pov, bypassing the scheduler is a no go.

Member Author:

From my pov, bypassing the scheduler is a no go.

luckily we also have another option, as described in:
### B. appending/merging an additional nodeAffinity rule on the target virt-launcher pod (merging it with VM owner set affinity/anti-affinity rules)

This will add an additional constraint for the scheduler, summing it up with the existing constraints/hints.
In case of mismatching/opposing rules, the destination pod will not be scheduled and the migration will fail.

Contributor:

@vladikr @davidvossel +1.

spec.nodeName is a horrible field that is kept in Kubernetes only due to backward compatibility and causes a lot of trouble. I agree that it should be considered a no-go.

@vladikr (Member) left a comment

I understand the intention behind introducing the nodeName field, but I fail to see how something like this may work at scale. It seems to me that most, if not all, of the user stories listed in the proposal can already be achieved through existing methods. Adding this field could potentially cause confusion for admins and lead to unnecessary friction with the Kubernetes scheduler and descheduler flows. I'd prefer to see solutions to the user stories aligned closely with established patterns (descheduler policies or scheduler plugins).


## User Stories
- As a cluster admin I want to be able to try to live-migrate a VM to a specific node for various possible reasons such as:
- I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
Member:

I wonder what would be so special about these VMs that they cannot be handled by a descheduler?
Also, how would the admin know that the said descheduler did not remove these VMs at a later time?

Member Author:

The descheduler is going to decide according to its internal policy.
In the more general use case it will be a cluster admin who decides to live migrate a VM just because they think it's the right thing to do.

- I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
- Foreseeing a peak in application load (e.g. new product announcement), I'd like to balance in advance my cluster according to my expectation and not to current observations
- During a planned maintenance window, I'm planning to drain more than one node in a sequence, so I want to be sure that the VM is going to land on a node that is not going to be drained in a near future (needing then a second migration) and being not interested in cordoning it also for other pods
- I just added a new node and I want to validate it trying to live migrate a specific VM there
Member:

This could be achieved today by modifying the VM's node selector or creating a new VM. New nodes will be the schedulers' very likely target for new pods already.

Member Author:

Right,
from a pure technical perspective this feature can already be achieved by directly manipulating the node affinity rules on the VM object. Now we have the LiveUpdate rollout strategy, so the new affinity rules will be quickly propagated to the VMI and consumed on the target pod of the live migration.
No doubt, on the technical side it will work.

But the central idea of this proposal is about allowing a cluster admin to do that without touching the VM object.
This is for two main reasons:

  • separation of personas: the VM owner can set rules on their VM; a cluster admin could still be interested in migrating a VM without messing up or altering the configuration set by the owner on the VM object.
  • separating what is a one-off configuration for a single migration attempt (so set on the VirtualMachineInstanceMigration object), relevant only for that attempt and with no side effects in the future, from what is a long-term configuration that is going to stay there and be applied also later on (future live migrations, restarts).

This comment applies to all the user stories here.
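
For context, this is roughly what the existing method requires today: the admin edits the owner's VM object so that, with vmRolloutStrategy: LiveUpdate, the new affinity propagates to the VMI before a migration is triggered (the VM and node names below are assumptions for illustration):

# Sketch of the "existing method": a hostname requirement added to the VM owner's object.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - my-new-target-node

The proposal's point is precisely that this touches (and must later be reverted on) the VM object, which belongs to another persona.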

## User Stories
- As a cluster admin I want to be able to try to live-migrate a VM to a specific node for various possible reasons such as:
- I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
- I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
Member:

This is also doable today as the default scheduler will try to choose the least busy node to schedule the target pod.

- As a cluster admin I want to be able to try to live-migrate a VM to a specific node for various possible reasons such as:
- I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
- I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
- Foreseeing a peak in application load (e.g. new product announcement), I'd like to balance in advance my cluster according to my expectation and not to current observations
Member:

Could you please elaborate on this?
How would the cluster look according to the admin's expectations?
Couldn't a taint be placed on some nodes to resolve capacity before the new product announcement?

Member:

Same; I do not want to argue with an admin on how the cluster should be managed, but this is surely not a recommended way that we want to encourage/support.

@tiraboschi (Member, Author):

It seems to me that most, if not all, of the user stories listed in the proposal can already be achieved through existing methods.

Right, I also added this note:

Note

technically all of this can already be achieved by manipulating the node affinity rules on the VM object, but as a cluster admin I want to keep a clear boundary between what is a long-lasting setting for a VM, defined by the VM owner, and what is a single-shot requirement for a one-off migration

@tiraboschi tiraboschi force-pushed the migration_target branch 3 times, most recently from 63818ed to a937ba2 Compare September 6, 2024 16:53
@vladikr (Member) commented Sep 7, 2024

I spoke with @fabiand offline.
Perhaps we can simply copy any provided Affinity and/or Tolerations set by the admin on the VMIM to the target pod -
instead of offering a dedicated API field.

My main concern with this proposal is that it may promote a wrong assumption that manual cluster balancing is preferred instead of relying on the scheduler/descheduler - while this is just a local minimum.

@tiraboschi (Member, Author):

I spoke with @fabiand offline. Perhaps we can simply copy any provided Affinity and/or Tolerations set by the admin on the VMIM to the target pod - instead of offering a dedicated API field.

I think that exposing the whole node affinity/anti-affinity (+ tolerations + ...) grammar on the VirtualMachineInstanceMigration object is by far too much.
In the end, as a cluster admin I only want to try to migrate that VM to a named node. All the other use cases are out of scope and should be addressed by correctly setting/amending the node affinity on the VM.
I still think that exposing an optional nodeName string on the VirtualMachineInstanceMigration spec is all we need to accomplish all the use cases here.

My main concern with this proposal is that it may promote a wrong assumption that manual cluster balancing is preferred instead of relying on the scheduler/descheduler - while this is just a local minimum.

I think it's up to us to emphasize this assumption in the API documentation, making it absolutely clear that the nodeName field is optional and that we recommend keeping it empty to let the scheduler find the best node (if migrating to a specific named node is not strictly needed).

I'm proposing something like:

// NodeName is a request to try to migrate this VMI to a specific node.
// If it is non-empty, the migration controller simply tries to configure the target VMI pod to be started on that node,
// assuming that it fits resource, limits and other node placement constraints; it will override nodeSelector and affinity
// and anti-affinity rules set on the VM.
// If it is empty (recommended), the scheduler becomes responsible for finding the best Node to migrate the VMI to.
// +optional
NodeName string `json:"nodeName,omitempty"`

I'm adding it to this proposal.

@vladikr (Member) commented Sep 9, 2024

I spoke with @fabiand offline. Perhaps we can simply copy any provided Affinity and/or Tolerations set by the admin on the VMIM to the target pod - instead of offering a dedicated API field.

I think that exposing the whole node affinity/anti-affinity (+ tolerations + ...) grammar on the VirtualMachineInstanceMigration object is by far too much. In the end, as a cluster admin I only want to try to migrate that VM to a named node.

Setting affinity and toleration is exactly what any other user would need to do to allow scheduling a workload on a tainted node, so I'm not sure why we need to facilitate this in the migration case.
Also, taking this route would not require us to add any new logic to the migration controller.

Generally speaking, Affinity and nodeSelector are the most acceptable ways to influence scheduling decisions.

All the other use cases are out of scope and should be addressed by correctly setting/amending the node affinity on the VM. I still think that exposing an optional nodeName string on the VirtualMachineInstanceMigration spec is all we need to accomplish all the use cases here.

My main concern with this proposal is that it may promote a wrong assumption that manual cluster balancing is preferred instead of relying on the scheduler/descheduler - while this is just a local minimum.

I think it's up to us to emphasize this assumption in the API documentation, making it absolutely clear that the nodeName field is optional and that we recommend keeping it empty to let the scheduler find the best node (if migrating to a specific named node is not strictly needed).

From my pov, we could get away without any API changes and without advertising this option at all - making it available for special cases and not mainstream.

I'm proposing something like:

// NodeName is a request to try to migrate this VMI to a specific node.
// If it is non-empty, the migration controller simply tries to configure the target VMI pod to be started on that node,
// assuming that it fits resource, limits and other node placement constraints; it will override nodeSelector and affinity
// and anti-affinity rules set on the VM.
// If it is empty (recommended), the scheduler becomes responsible for finding the best Node to migrate the VMI to.
// +optional
NodeName string `json:"nodeName,omitempty"`

I'm adding it to this proposal.

@tiraboschi (Member, Author) commented Sep 9, 2024

Setting affinity and toleration is exactly what any other user would need to do to allow scheduling a workload on a tainted node, so I'm not sure why we need to facilitate this in the migration case. Also, taking this route would not require us to add any new logic to the migration controller.
...
From my pov, we could get away without any API changes and without advertising this option at all - making it available for special cases and not mainstream.

I'm sorry but now I'm a bit confused.
As for the Kubevirt documentation,
in order to initiate a live migration I'm supposed to create a VirtualMachineInstanceMigration (VMIM) object like:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora

or, more imperatively, executing something like:

$ virtctl migrate vmi-fedora

that under the hood is going to create a VirtualMachineInstanceMigration for me.

This proposal is now about extending it with the optional capability to try to live migrate to a named node.
So this is proposing to allow the creation of something like:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
  nodeName: my-new-target-node

or executing something like:

$ virtctl migrate vmi-fedora --nodeName=my-new-target-node

and this is because one of the key points here is that the cluster admin is not supposed to be required to amend the spec of VMs owned by other users in order to try to migrate them to named nodes.

The migration controller will simply notice that nodeName on the VirtualMachineInstanceMigration is not empty and it will inject/replace (still under discussion, we have two alternatives here) something like:

spec:
  nodeName: <nodeName>

or

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchFields:
              - key: metadata.name
                operator: In
                values:
                  - <nodeName>

on the target virt-launcher pod.

Can you please summarize what you exactly mean by

Perhaps we can simply copy any provided Affinity and/or Tolerations set but the admin on the VMIM to the target pod - instead of offering a dedicated API field.

?

@vladikr (Member) commented Sep 9, 2024

Setting affinity and toleration is exactly what any other user would need to do to allow scheduling a workload on a tainted node, so I'm not sure why we need to facilitate this in the migration case. Also, taking this route would not require us to add any new logic to the migration controller.
...
From my pov, we could get away without any API changes and without advertising this option at all - making it available for special cases and not mainstream.

I'm sorry but now I'm a bit confused.

Yes, apologies. I meant to say a dedicated API.
What I mean is that if we must support this behavior (which I'm not 100% convinced we should), then we can simply expose the already existing fields on the VMIM object, such as .spec.affinity and .spec.tolerations. The user will express their desire as they would on any other pod. Our migration controller will simply copy it to the target pod and merge it with the existing rules from the VMI.

As for the Kubevirt documentation (https://kubevirt.io/user-guide/compute/live_migration/), in order to initiate a live migration I'm supposed to create a VirtualMachineInstanceMigration (VMIM) object like:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora

or, more imperatively, executing something like:

$ virtctl migrate vmi-fedora

that under the hood is going to create a VirtualMachineInstanceMigration for me.

This proposal is now about extending it with the optional capability to try to live migrate to a named node. So this is proposing to allow the creation of something like:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
  nodeName: my-new-target-node

or executing something like:

$ virtctl migrate vmi-fedora --nodeName=my-new-target-node

and this is because one of the key points here is that the cluster admin is not supposed to be required to amend the spec of VMs owned by other users in order to try to migrate them to named nodes.

The migration controller will simply notice that nodeName on the VirtualMachineInstanceMigration is not empty and it will inject/replace (still under discussion, we have two alternatives here) something like:

I think that by using .spec.affinity and .spec.tolerations the controller doesn't need to make any assumptions.
Also, nodeName will not try to migrate the workload to a tainted node.

spec:
  nodeName: <nodeName>

or

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchFields:
              - key: metadata.name
                operator: In
                values:
                  - <nodeName>

on the target virt-launcher pod.

Can you please summarize what you exactly mean by

Perhaps we can simply copy any provided Affinity and/or Tolerations set by the admin on the VMIM to the target pod - instead of offering a dedicated API field.

?

Yes. As I mentioned above, I would prefer to let the admin add .spec.affinity and/or .spec.tolerations to the VMIM object, and the migration controller would merge these on the target pod.
This way, there wouldn't be a need for new logic in the controller and the admin won't need to make assumptions about the .spec.nodeName field.
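
A sketch of what that alternative could look like; note that these fields do not exist on the VMIM spec today, so the field names and values below are purely hypothetical:

# Hypothetical VMIM carrying admin-supplied placement hints that the migration
# controller would merge onto the target virt-launcher pod.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
  affinity:                      # hypothetical field
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - my-new-target-node
  tolerations:                   # hypothetical field
  - key: example.com/onboarding
    operator: Exists
    effect: NoSchedule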

@iholder101 (Contributor):

@davidvossel

Here's what I'm most interested in now... Does anyone see how proceeding with this will cause any future harm or complexity to the project that impacts our ability to maintain KubeVirt?

Further to the risks @vladikr has mentioned, I think this proposal poses a "philosophical" question: is it reasonable to add a feature that admins ask for, although as developers we believe it has no real use-cases and is the wrong way of doing things? Is it our responsibility to simply please the users in this case, or to try to educate them towards using the right mechanisms, even if they're used to doing things in a different way?

At the end of the day, here's the reality as I see it. Admins are asking for this feature... And they're asking for it over and over. For years. I tried to ignore it in the past, and guide people to our preferred way of doing this sort of thing, but the requests keep coming. I'm worn down.

What you write here is consistent with what @tiraboschi has added to the proposal:

The capability of live migrating a VM to a specific node is a pretty common and accepted
feature across traditional virtualization solutions ...

This text openly admits that the sole use-case here is that admins are used to this feature that's common on other platforms.

So, eventually, we need to decide if this reasoning is enough:

For years. I tried to ignore it in the past, and guide people to our preferred way of doing this sort of thing, but the requests keep coming. I'm worn down.

Does being worn down by user requests justify adding a feature we believe is wrong?
Or should we instead insist on guiding people to our preferred way of doing this sort of thing?

@tiraboschi (Member, Author):

What you write here is consistent with what @tiraboschi has added to the proposal:

The capability of live migrating a VM to a specific node is a pretty common and accepted
feature across traditional virtualization solutions ...

This text openly admits that the sole use-case here is that admins are used to this feature that's common on other platforms.

No, I don't think that we can derive this.
All the goals and the use cases in the proposal are individually valid.
We could state that the different use cases expressed here can eventually find alternative solutions with the existing APIs.
The big advantage of this proposal is that a single API and approach can fit them all and this will lower the barrier to entry for new users coming from other solutions.

@EdDev (Member) commented Nov 5, 2024

We will escalate this enhancement in the next maintainer/approver meeting and update.

Most of what I am going to summarize below has been raised already above, however, I think it is useful to see the whole picture in one place.

The topic has been discussed in the maintainer meeting on the 4th of November:

  1. Reasoning needs to be based on something that does not have other better alternatives.
  2. Adding the option to set a target node needs to be available only for Cluster Admins but not for Namespace Admins. This is partially an existing problem of who is able to trigger a migration, but the addition will expand the problem.
  3. The proposed selector needs to be re-reviewed by @vladikr . The explanation of why it was chosen is well explained at the moment. Most thought that the proposal is fine in this regard.

I think that the 1st point can be easily addressed and probably not a major blocker. I have not re-reviewed the latest changes done in the last few days.

The 2nd point can be solved at implementation time, but it does need some research, checking whether we can validate that the target node selector can only be used by a cluster-admin.
At the minimum, I think it is a valid concern and if there is a simple solution, I would vote to handle it.

Hopefully the 3rd point will pass @vladikr .

@tiraboschi (Member, Author):

  1. Adding the option to set a target node needs to be available only for Cluster Admins but not for Namespace Admins. This is partially an existing problem of who is able to trigger a migration, but the addition will expand the problem.

@EdDev, can you please explain why this proposal will expand the problem?
Currently a VM owner can already amend node affinity on the VM spec of a running VM and it will be eventually propagated down to the VMI object (if the cluster is configured with vmRolloutStrategy: "LiveUpdate"). At that point creating a VMIM object in that namespace will trigger a live migration to the target node that was indirectly selected acting on a different object.
But the result will be exactly the same with the same risks and the same concerns.

@EdDev (Member) commented Nov 5, 2024

  1. Adding the option to set a target node needs to be available only for Cluster Admins but not for Namespace Admins. This is partially an existing problem of who is able to trigger a migration, but the addition will expand the problem.

@EdDev, can you please explain why this proposal will expand the problem? Currently a VM owner can already amend node affinity on the VM spec of a running VM and it will be eventually propagated down to the VMI object (if the cluster is configured with vmRolloutStrategy: "LiveUpdate"). At that point creating a VMIM object in that namespace will trigger a live migration to the target node that was indirectly selected acting on a different object. But the result will be exactly the same with the same risks and the same concerns.

I think @vladikr can explain this better.
While the same outcome can be reached by updating the VMs, with the new option it will be very easy to create an alternative migration controller that will bypass and prioritize a target node over other logic.

@vladikr (Member) commented Nov 5, 2024

  1. Adding the option to set a target node needs to be available only for Cluster Admins but not for Namespace Admins. This is partially an existing problem of who is able to trigger a migration, but the addition will expand the problem.

@EdDev, can you please explain why this proposal will expand the problem? Currently a VM owner can already amend node affinity on the VM spec of a running VM and it will be eventually propagated down to the VMI object (if the cluster is configured with vmRolloutStrategy: "LiveUpdate"). At that point creating a VMIM object in that namespace will trigger a live migration to the target node that was indirectly selected acting on a different object. But the result will be exactly the same with the same risks and the same concerns.

The concern is mainly because of the interaction with the descheduler. If migrations are forced to specific nodes, the descheduler may act and increase the migration queue.
For example, other products don't allow migrations to a named node when their version of the descheduler is active.

@tiraboschi (Member, Author):

The concern is mainly because of the interaction with the descheduler. If migrations are forced to specific nodes, the descheduler may act and increase the migration queue. For example, other products don't allow migrations to a named node when their version of the descheduler is active.

What about adding a configuration knob under spec.configuration.migrations on the KubeVirt CR to let the cluster admin configure the VMIM admission webhook to reject migration requests to named nodes, in case they have configured a descheduler or a load-aware scheduler and want to rely only on that for their cluster? See the sketch below.
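
A sketch of such a knob; the spec.configuration.migrations stanza exists today, but the field name below is purely hypothetical and not part of the current API:

# Hypothetical cluster-wide switch: reject VMIM objects that target a named node
# when the cluster relies on a descheduler / load-aware scheduler instead.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    migrations:
      allowMigrationToNamedNode: false   # hypothetical field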

Comment on lines 92 to 94
### Why not simply `NodeSelector map[string]string`
On the Pod spec we also have `NodeSelector map[string]string`, which is used in **AND** with `NodeAffinity` rules, so from this point of view it's a viable option.
On the other side, `pod.spec.nodeSelector` only matches labels, and the predefined `kubernetes.io/hostname` [label is not guaranteed to be reliable](https://kubernetes.io/docs/reference/node/node-labels/#preset-labels).
Member:

Looking at this closer, I think using NodeSelector is the better option.

The issues I have with NodeSelectorTerm mostly have to do with how awkward it is to expose this tunable outside of the Affinity struct. The NodeSelectorTerm isn't really its own concept exactly. It's one part of the pod Affinity, and it's also a part of Affinity that has different meanings depending on whether it's in the preferred or required during scheduling section of the NodeAffinity. I'd rather not have to expose all these details to users.

While NodeSelector is less expressive, I think it's the most straightforward UX for users, and I think it satisfies the goals here.
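
A sketch of the simpler shape being argued for here; placing nodeSelector on the VMIM spec is the proposal's new field, not something in the current API, and the node name is illustrative:

# Directed live migration expressed with a plain nodeSelector on the VMIM.
# The selector would be ANDed with the VM's own placement rules on the target pod.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
  nodeSelector:                       # proposed field, not in the current API
    kubernetes.io/hostname: my-new-target-node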

@fabiand (Member) commented Nov 5, 2024

available only for Cluster Admins but not for Namespace Admins.

IIUIC then we can solve this - independently of this work - by adjusting the default RBAC permissions and removing the create permission for VMIMs.

@vladikr (Member) commented Nov 5, 2024

available only for Cluster Admins but not for Namespace Admins.

IIUIC then we can solve this - independently of this work - by adjusting the default RBAC permissions and removing the create permission for VMIMs.

Do namespace admins need an RBAC to create objects in their namespace?

@davidvossel (Member):

Does being worn down by user requests justify adding a feature we believe is wrong?
Or should we instead insist on guiding people to our preferred way of doing this sort of thing?

Repeated user requests like this provide us with a signal. I think it's important as project maintainers that we continually recalibrate ourselves to this signal. We create KubeVirt to be used. If we can reasonably improve the quality of life for our users, we should.

It's possible that I've been too slow and resistant to this recalibration based on user input in the past. If it looks like I'm being inconsistent here based on previous historical conversations, it's because looking back I can see areas where I should have evolved quicker.

This is a tough balance and I don't know where the line is between the maintainers vision for a project and user demand. When in doubt, I think we need to start leaning closer to the users side when possible, even if the solutions are imperfect.


In general, here's where I currently stand on this review

  • I'd prefer we exposed vmim.Spec.NodeSelector as the mechanism for directed live migration
  • We should clearly document that this is best effort: no guarantees the VMI will successfully move to or stay at the desired target
  • We should implement a max queue internally that is 2x the max parallel migration count.

The max queue won't impact our controllers since they already back off well before that, but it will prevent users and third party controllers from flooding active migrations. It also reduces the risk that a migration storm could occur between two controllers attempting to move VMIs in conflicting ways (like the descheduler and a third party controller colliding in an unexpected way).

@vladikr (Member) commented Nov 5, 2024

We spoke about this approach offline with @davidvossel and I'm +1 on the max queue compromise.
I also prefer the simple nodeSelector.

I think that going forward we should consider implementing priority for migrations, so that system migration tasks will be handled faster than user-triggered migrations.
This way we won't block upgrades and evictions.

@iholder101 (Contributor):

Thank you @davidvossel for your response.
I'm still not sure I 100% agree with this approach, but I do understand your reasoning and line of thinking.

  • We should clearly document that this is best effort: no guarantees the VMI will successfully move to or stay at the desired target

I agree. I'd add that we also should inform the users that this approach is not recommended, and do whatever we can in order to encourage users to never use this feature.

I also strongly think that the use-cases presented here under User Stories are, to say it lightly, far from being convincing. IMO we should not present them to users and should never encourage them to use this approach over others.

@kubevirt-bot:

Pull requests that are marked with lgtm should receive a review
from an approver within 1 week.

After that period the bot marks them with the label needs-approver-review.

/label needs-approver-review

@kubevirt-bot kubevirt-bot added the needs-approver-review Indicates that a PR requires a review from an approver. label Nov 14, 2024
tiraboschi and others added 4 commits November 15, 2024 12:02
…specific nodes

Adding a design proposal to extend VirtualMachineInstanceMigration
object with an additional API to let a cluster admin
try to trigger a live migration of a VM injecting
on the fly an additional NodeSelectorTerm constraint.
The additional NodeSelectorTerm can only restrict the set
of Nodes that are valid targets for the migration
(eventually down to a single host); in order to
achieve this, all the `NodeSelectorRequirements`
defined on the additional `NodeSelectorTerm` set
on the VMIM object should be appended to
all the `NodeSelectorTerms` already defined on
the VM object.
All the affinity rules defined on the VM spec are still
going to be satisfied.

Signed-off-by: Simone Tiraboschi <[email protected]>
Added that this is for exceptions

Signed-off-by: Fabian Deutsch <[email protected]>
Signed-off-by: Simone Tiraboschi <[email protected]>
@kubevirt-bot kubevirt-bot removed the lgtm Indicates that a PR is ready to be merged. label Nov 15, 2024
@kubevirt-bot:

New changes are detected. LGTM label has been removed.

@tiraboschi (Member, Author):

In general, here's where I currently stand on this review

I'd prefer we exposed vmim.Spec.NodeSelector as the mechanism for directed live migration
We should clearly document that this is best effort: no guarantees the VMI will successfully move to or stay at the desired target
We should implement a max queue internally that is 2x the max parallel migration count.

Reworked to take care of these comments.
Please review.

@xpivarc (Member) commented Nov 18, 2024

/sig compute

Comment on lines +28 to +29
> [!IMPORTANT]
> Directly selecting named nodes as destinations is not assumed to be a default tool for balancing workloads or for all the use-cases above. It's instead just a convenient tool for exceptional situations and one-offs, to ensure that an admin can quickly react to emergencies and spikes.
@vladikr (Member) commented Nov 19, 2024:

Are we also going to document this in the user-guide for this feature?

Member Author:

Yes, we are.


Such a capability is expected from traditional virtualization solutions but, with certain limitations, is still pretty common across the most popular cloud providers (at least when using dedicated and not shared nodes).
- For instance on Amazon EC2 the user can already live-migrate a `Dedicated Instance` from a `Dedicated Host` to another `Dedicated Host` explicitly choosing it from the EC2 console, see: https://repost.aws/knowledge-center/migrate-dedicated-different-host
- also on Google Cloud Platform Compute Engine the user can easily and directly live-migrate a VM from a `sole-tenancy` node to another one via CLI or REST API, see: https://cloud.google.com/compute/docs/nodes/manually-live-migrate#gcloud
Member:

I think this is an entirely different offering from what KubeVirt in general does.
I don't know how relevant this example is. Here the user specifically knows what node was given to him.

Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs

Member Author:

Why do you think that this use case is different from Dedicated Hosts on AWS EC2?
See for instance figure 1 on https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes
Normally your VMs are going to be executed on a multi-tenant host shared with other customers.
If you have special requirements in terms of physical isolation you can decide to pay more and have a set of physical hosts that are exclusively dedicated to you, in that case you know the names of the hosts and you can also decide (for various reasons) to manually live migrate a VM to a named node within your sole-tenancy node group.

Now if you are the cluster admin of an on-premise (or not but with dedicated hosts) cluster with KubeVirt, as for this proposal, you will be able to live migrate a VM to a named node.
The assumption is still that you are the cluster admin and you know the name of the hosts in your cluster.
Why is it different?

@vladikr (Member) commented Nov 19, 2024

In general, here's where I currently stand on this review
I'd prefer we exposed vmim.Spec.NodeSelector as the mechanism for directed live migration
We should clearly document this is best effort. no guarantees VMI will successfully move or stay at desired target
We should implement a max queue internally that is 2x the max parallel migration count.

Reworked to take care of these comments. Please review.

Thank you.
I'm fine with the updated approach.
I'm not sure if we should mention gcloud or Harvester, as gcloud speaks about an entirely different scenario.

I would add a sentence somewhere (maybe documentation is enough), something along the lines of: "With this feature, the admin can migrate virtual machines to specific nodes, but the Kubernetes descheduler may move those virtual machines to other nodes to optimize cluster resources."

/approve

@kubevirt-bot:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vladikr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 19, 2024
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. needs-approver-review Indicates that a PR requires a review from an approver. sig/compute size/L
10 participants