Lab: Advanced Virtual Machine Management

Clone a VM, take a VM snapshot and restore it as a new PVC to deploy another VM, perform a live migration, and initiate node maintenance.

Outcomes

  • Clone a VM.

  • Take a snapshot of a VM.

  • Restore a snapshot as a new PVC and then use that new PVC as the root disk for a new VM.

  • Live migrate a VM.

  • Put a cluster node into maintenance mode.

As the student user on the workstation machine, use the lab command to prepare your environment for this exercise and to ensure that all required resources are available.

[student@workstation ~]$ lab start advanced-review

Instructions

  1. Open a command-line window and use the oc command to log in to your Red Hat OpenShift cluster as the admin user with redhatocp as the password. The OpenShift cluster API endpoint is https://api.ocp4.example.com:6443.

    Open a web browser and log in to the OpenShift web console at https://console-openshift-console.apps.ocp4.example.com

    Confirm that the golden-rhel VM is running under the advanced-review project.

    1. Open a command-line window and log in to the OpenShift cluster as the admin user with redhatocp as the password.

      [student@workstation ~]$ oc login -u admin -p redhatocp \
        https://api.ocp4.example.com:6443
      Login successful.
      ...output omitted...
    2. Identify the URL for the OpenShift web console.

      [student@workstation ~]$ oc whoami --show-console
        https://console-openshift-console.apps.ocp4.example.com
    3. Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com. Click htpasswd_provider and log in as the admin user with redhatocp as the password.

    4. Navigate to Virtualization → VirtualMachines. Select the advanced-review project from the Project list. Confirm that the golden-rhel VM is running.
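
      As an optional check, you can also confirm the state of the VM from the command line. The oc get command with the vm and vmi resource types lists the VirtualMachine and VirtualMachineInstance resources in the project; the golden-rhel VM should report a Running status.

      [student@workstation ~]$ oc get vm,vmi -n advanced-review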

  2. Clone the golden-rhel VM to a new www1 VM. Start the www1 VM.

    If you need to access the VM's console, then log in as the root user with redhat as the password.

    1. The cloning process requires that you stop the source machine.

      Select the golden-rhel VM and then click Actions → Stop. Wait for the machine to stop.

    2. Click Actions → Clone. Set the VM name to www1, select Start VirtualMachine on clone, and then click Clone.

    3. Navigate to Virtualization → VirtualMachines and wait about 3-5 minutes for the machine to start. Confirm that the new www1 VM is running.
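
    The web console performs the clone for you. If you prefer to clone from the command line, a VirtualMachineClone resource gives an equivalent result. The following is a minimal sketch, assuming that the clone.kubevirt.io/v1alpha1 API is available in your cluster; the resource name golden-rhel-to-www1 is arbitrary.

      apiVersion: clone.kubevirt.io/v1alpha1
      kind: VirtualMachineClone
      metadata:
        name: golden-rhel-to-www1        # arbitrary name for the clone request
        namespace: advanced-review
      spec:
        source:                          # the stopped VM to clone from
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: golden-rhel
        target:                          # the new VM that the clone creates
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: www1

    A clone created this way does not have an equivalent of the Start VirtualMachine on clone option, so you would typically start www1 separately. If the virtctl client is installed on the workstation machine, virtctl console www1 -n advanced-review opens the VM console, where you can log in as the root user with redhat as the password.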

  3. Take a live snapshot of the golden-rhel VM. The name of the snapshot must be golden-snap1.

    1. Click the vertical ellipsis icon at the right of the golden-rhel VM and then click Start. Wait for the machine to start.

    2. Select the golden-rhel VM, navigate to the Snapshots tab, and then click Take snapshot.

      Enter the golden-snap1 value in the Name field. Click Save. Wait for the snapshot to report the status as Succeeded.
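
    The web console creates a VirtualMachineSnapshot resource behind the scenes. If you want to take the same snapshot from the command line, the following is a minimal sketch, assuming that the snapshot.kubevirt.io/v1alpha1 API is available in your cluster.

      apiVersion: snapshot.kubevirt.io/v1alpha1
      kind: VirtualMachineSnapshot
      metadata:
        name: golden-snap1
        namespace: advanced-review
      spec:
        source:                    # the running VM to snapshot
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: golden-rhel

    You can then watch the snapshot with oc get virtualmachinesnapshot golden-snap1 -n advanced-review and wait for it to report that it is ready to use, which corresponds to the Succeeded status in the web console.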

  4. Your snapshot from the preceding step creates a volume snapshot for each VM disk. Restore the volume snapshot of the golden-rhel PVC and then use that new PVC as the root disk for a new VM.

    Use the following information when restoring the volume snapshot as a new PVC:

    Field          Value
    Name           root-copy
    StorageClass   ocs-external-storagecluster-ceph-rbd-virtualization
    Access mode    Shared access (RWX)

    The lab command prepared the /home/student/DO316/labs/advanced-review/www2.yaml file, which declares a VirtualMachine resource. Edit the file and set the root-copy PVC as the root disk. Create the www2 VM from the resource file.

    1. Navigate to Storage → VolumeSnapshots. Click the vertical ellipsis icon at the right of the snapshot where the source is the golden-rhel PVC, and then click Restore as new PVC.

      Complete the form according to the instructions, and then click Restore. A CLI equivalent of this restore is sketched at the end of this step.

    2. From the terminal on the workstation machine, edit the ~/DO316/labs/advanced-review/www2.yaml file to specify the PVC to use as the root disk:

      ...output omitted...
            volumes:
              - name: rootdisk
                persistentVolumeClaim:
                  claimName: root-copy
    3. Use the oc create command to create the resource.

      [student@workstation ~]$ oc create -f ~/DO316/labs/advanced-review/www2.yaml
      virtualmachine.kubevirt.io/www2 created

      If using the OpenShift web console, navigate to Virtualization → VirtualMachines, and then click Create → With YAML. Copy and paste the /home/student/DO316/labs/advanced-review/www2.yaml file contents into the editor. Set the claimName parameter to root-copy and then click Create.

    4. Use the oc get command to confirm that the www2 VM is running.

      [student@workstation ~]$ oc get vmi -n advanced-review
      NAME          AGE     PHASE     IP           NODENAME   READY
      golden-rhel   20m     Running   10.11.0.39   worker01   True
      www1          20m     Running   10.8.2.100   worker02   True
      www2          2m49s   Running   10.8.2.102   worker02   True
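
    The restore that you performed in the web console can also be expressed as a PersistentVolumeClaim that uses the volume snapshot as its data source. The following is a minimal sketch of that approach; the snapshot name and the storage size are placeholders that you would replace with the values shown under Storage → VolumeSnapshots and on the source PVC, and the Block volume mode is an assumption that you would match against the source PVC.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: root-copy
        namespace: advanced-review
      spec:
        storageClassName: ocs-external-storagecluster-ceph-rbd-virtualization
        accessModes:
          - ReadWriteMany                  # Shared access (RWX)
        volumeMode: Block                  # match the volume mode of the source PVC
        dataSource:
          apiGroup: snapshot.storage.k8s.io
          kind: VolumeSnapshot
          name: <volume-snapshot-name>     # placeholder: the snapshot of the golden-rhel PVC
        resources:
          requests:
            storage: <source-pvc-size>     # placeholder: at least the size of the source PVC
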
  5. Attempt a live migration of the golden-rhel VM. The migration is not possible because of a misconfigured disk. Identify the disk and detach it from the VM. (This misconfigured disk contains only temporary data that you can safely discard.)

    1. Navigate to Virtualization → VirtualMachines. Click the vertical ellipsis icon at the right of the golden-rhel VM and then confirm that Migrate is disabled.

      The Status and Conditions columns show the Not migratable and LiveMigratable=False values for the golden-rhel VM.

    2. Select the golden-rhel VM, and navigate to the Diagnostics tab. The error message for the LiveMigratable type indicates that the tempdata PVC is not using the ReadWriteMany access mode.

    3. Click Actions → Stop. Wait for the machine to stop.

    4. Navigate to the Configuration tab. In the Disks section, click the vertical ellipsis icon at the right of the tempdata disk and then click Detach. Click Detach to confirm the operation.

    5. Click Actions → Start to restart the VM. Wait for the VM to start.

    6. Perform a live migration of the VM. Click Actions → Migrate. Wait for the migration to complete.
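
    You can trigger the same migration from the command line. If the virtctl client is installed on the workstation machine, virtctl migrate golden-rhel -n advanced-review requests the migration. Alternatively, creating a VirtualMachineInstanceMigration resource has the same effect; the following is a minimal sketch, and the resource name is arbitrary.

      apiVersion: kubevirt.io/v1
      kind: VirtualMachineInstanceMigration
      metadata:
        name: golden-rhel-migration        # arbitrary name for the migration request
        namespace: advanced-review
      spec:
        vmiName: golden-rhel               # the VMI to live migrate

    You can follow the progress with oc get virtualmachineinstancemigrations -n advanced-review, and confirm the new node in the NODENAME column of oc get vmi -n advanced-review.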

  6. Put the worker02 cluster node into maintenance mode by using a NodeMaintenance custom resource, and move its workload to the remaining nodes.

    The lab command has prepared the /home/student/DO316/labs/advanced-review/nm.yaml resource file, which you can use as a model. To use that file, you must adapt it to the requirements.

    1. From the terminal on the workstation machine, edit the ~/DO316/labs/advanced-review/nm.yaml file:

      apiVersion: nodemaintenance.medik8s.io/v1beta1
      kind: NodeMaintenance
      metadata:
        name: node-maintenance
      spec:
        nodeName: worker02
        reason: "Node maintenance"
    2. Use the oc create command to create the resource.

      [student@workstation ~]$ oc create -f ~/DO316/labs/advanced-review/nm.yaml
      nodemaintenance.nodemaintenance.medik8s.io/node-maintenance created
    3. Confirm that the node has the SchedulingDisabled status.

      [student@workstation ~]$ oc get nodes
      NAME       STATUS                     ROLES                         ...
      master01   Ready                      control-plane,master,worker   ...
      master02   Ready                      control-plane,master,worker   ...
      master03   Ready                      control-plane,master,worker   ...
      worker01   Ready                      worker                        ...
      worker02   Ready,SchedulingDisabled   worker                        ...
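
    Putting the node into maintenance also drains it, so its virtual machines are moved to the remaining schedulable nodes. As an optional check from the command line, list the VMIs again and inspect the maintenance resource; no running VMI in the project should still report worker02 in the NODENAME column.

      [student@workstation ~]$ oc get vmi -n advanced-review
      [student@workstation ~]$ oc get nodemaintenances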

Evaluation

As the student user on the workstation machine, use the lab command to grade your work. Correct any reported failures and rerun the command until successful.

[student@workstation ~]$ lab grade advanced-review

Finish

As the student user on the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish advanced-review

Revision: do316-4.14-d8a6b80