
Performing Virtual Machine Live Migrations

Objectives

  • Describe and manage VM live migrations.

Live Migrations

During maintenance or troubleshooting, you might need to migrate a VM to a different cluster node. Live migration is the process of moving a running VMI to a different cluster node without disrupting access or the virtual workload.

To perform a live migration, the VM must meet the following conditions:

  • The underlying PVC must use ReadWriteMany (RWX) access mode.

  • The VM must not use the bridge binding type with the default pod network.

  • Ports 49152 and 49153 must be available in the VM's virt-launcher pod; live migration fails if these ports are specified in a masquerade network interface.
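The RWX requirement applies to the PVC that backs the VM disk. As an illustration only, a minimal PVC sketch that satisfies the access-mode condition might look like the following fragment (the name, size, and storage class shown here are hypothetical examples, not values from this course):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vm-disk                 # hypothetical name
spec:
  accessModes:
  - ReadWriteMany                       # RWX is required for live migration
  volumeMode: Block
  resources:
    requests:
      storage: 30Gi                     # hypothetical size
  storageClassName: ocs-external-storagecluster-ceph-rbd
```

A PVC that uses ReadWriteOnce instead would block live migration, because both the source and destination virt-launcher pods must mount the volume at the same time.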

During installation, the Red Hat OpenShift Virtualization operator creates a storage profile resource for each storage class.

[user@host ~]$ oc get storageprofiles.cdi.kubevirt.io
NAME                                                  AGE
nfs-storage                                           4m1s
ocs-external-storagecluster-ceph-rbd                  4m1s
ocs-external-storagecluster-ceph-rbd-virtualization   4m1s
ocs-external-storagecluster-ceph-rgw                  4m1s
ocs-external-storagecluster-cephfs                    4m
openshift-storage.noobaa.io                           4m

You can modify the storage profile resource to configure the default access and volume modes for a particular storage class.

You can set the RWX access mode for a storage class by using the oc edit command:

[user@host ~]$ oc edit storageprofile nfs-storage -n openshift-cnv
...output omitted...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteMany
    volumeMode: Filesystem
...output omitted...

Note

Modifications to the storage profiles affect new PVCs and do not impact existing PVCs. You cannot modify the access mode of an existing PVC.

Limitations of Live Migrations

You can adjust the default settings for live migration limits and timeouts by editing the HyperConverged CR. Use the oc edit hco -n openshift-cnv kubevirt-hyperconverged command to modify the default parameters. To restore the default value for any spec.liveMigrationConfig parameter, delete the key-value pair, and then save the file.

parallelMigrationsPerCluster
    The maximum number of migrations that can run in parallel in the cluster. Default value: 5.

parallelOutboundMigrationsPerNode
    The maximum number of parallel outbound migrations per node. Default value: 2.

bandwidthPerMigration
    Limits the bandwidth, in MiB/s, that is used for each migration. Default value: 64.

completionTimeoutPerGiB
    The migration is canceled if it takes longer than this time, in seconds per GiB of memory. The size of the migrating disks is included in the calculation if you use the BlockMigration migration method. Default value: 800.

progressTimeout
    The migration is canceled if the memory copy fails to make progress in this time, in seconds. Default value: 150.
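Applied to the HyperConverged CR, these parameters live under spec.liveMigrationConfig. The following fragment is a sketch that restates the default values from the table above; note that bandwidthPerMigration is written as a Kubernetes quantity (64Mi):

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    parallelMigrationsPerCluster: 5        # cluster-wide parallel migrations
    parallelOutboundMigrationsPerNode: 2   # outbound migrations per node
    bandwidthPerMigration: 64Mi            # bandwidth limit per migration
    completionTimeoutPerGiB: 800           # seconds per GiB of memory
    progressTimeout: 150                   # seconds without memory-copy progress
```

Deleting any of these key-value pairs and saving the file restores that parameter to its default, as described above.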

Virtualization Node Placement

You can configure node placement rules to ensure that VMs run on the most appropriate nodes. As an administrator, you might want particular VMs to run on the same nodes, on separate nodes, or on a specific node to ensure that the VMs have access to the features that they need from the underlying hardware.

Node placement rules are declared in the spec field of the VirtualMachine YAML manifest by using the following rule types:

nodeSelector

VMs are scheduled on nodes that are labeled with the key-value pairs as specified in this field. The node's labels must exactly match all of the listed key-value pairs.

For example, if you want a VM to run on a node that includes the example-key-1=example-value-1 and example-key-2=example-value-2 labels, then use the following metadata in your VM manifest:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-node-selector
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
...output omitted...

If no node matches all of the listed key-value pairs, the VM is not scheduled.

affinity

Use node affinity rules to schedule a VM to run on a group of nodes. You can also specify anti-affinity rules to prevent a VM from running on a particular group of nodes. To allow more flexibility during scheduling, you can specify whether the affinity rule is required for the VM to run, or is preferred but not required.

Note

Affinity rules apply only during scheduling; running VMs are not rescheduled even if the constraints are no longer met.

In the following example, the affinity rule specifies that the VM must run on a node with the example.io/example-key=example-value-1 label or with the example.io/example-key=example-value-2 label. The preferredDuringSchedulingIgnoredDuringExecution specification tells the scheduler to prioritize nodes with the example-node-label-key=example-node-label-value label, although it can ignore the constraint if none of the nodes have the example-node-label-key=example-node-label-value label.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.io/example-key
            operator: In
            values:
            - example-value-1
            - example-value-2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: example-node-label-key
            operator: In
            values:
            - example-node-label-value
...output omitted...

tolerations

A toleration specifies that a VM can be scheduled on a node whose taint matches the VM's toleration. However, a toleration only permits scheduling onto a tainted node; it does not require the scheduler to place the VM on a node that is configured with that taint.

In this example, the nodes that are reserved for VM workloads are tainted with the key=virtualization:NoSchedule taint. The VM is configured with a matching toleration, and thus can run on nodes with that taint.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-tolerations
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
...output omitted...
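The taint in the preceding example is set on the node object, not on the VM. For reference, a node fragment carrying the matching taint might look like the following sketch (the node name is a hypothetical example); in practice, an administrator applies such a taint with the oc adm taint nodes command:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-virt-1        # hypothetical node name
spec:
  taints:
  - key: "key"               # matches the toleration key in the VM manifest
    value: "virtualization"
    effect: "NoSchedule"     # pods without a matching toleration are not scheduled
```

VMs without the matching toleration cannot be scheduled on this node, which reserves it for virtualization workloads.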

Starting, Monitoring, and Canceling Live Migrations

As a cluster administrator, you can also manually migrate virtual machine instances (VMIs).

To start a live migration by using the Red Hat OpenShift web console, navigate to Virtualization → VirtualMachines. Select your project from the Project list. Locate the row with the VM to migrate, and then click the vertical ellipsis icon at the right of the row. Click Migrate. Wait a few moments for the migration to complete.

You can also click the VM's name to navigate to the VM's Overview tab. Click Actions → Migrate. Wait a few moments for the migration to complete.

To migrate from the command line, create a VirtualMachineInstanceMigration manifest for the VMI as shown in the following example:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-name

Use the oc create -f manifest-name.yaml -n web-servers command to initiate the live migration of the VMI that is specified in the manifest. You can view the status of a live migration on the web console by navigating to the Overview tab, in the Details section of the VM.

You can use the command line to monitor the status of a VM's live migration with the oc describe vmi vmi-name command or with the oc get vmim command.

To cancel a live migration in the web console, navigate to Virtualization → VirtualMachines. Select your project from the Project list. Click the vertical ellipsis icon at the right of the VM that is being migrated, and then click Cancel Migration.

To cancel a live migration from the command line, you can delete the VirtualMachineInstanceMigration object that is associated with the migration by using the oc delete vmim migration-job command.

References

For more information, refer to the Live Migration chapter in the Red Hat OpenShift Container Platform 4.14 documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/virtualization/index#live-migration

Revision: do316-4.14-d8a6b80