Create a VM snapshot to deploy a new VM, create a VM by cloning an existing PVC, perform live migrations, and initiate node maintenance.
Outcomes
Clone a VM.
Take a snapshot of a VM.
Restore a snapshot as a new PVC and then use that new PVC as the root disk for a new VM.
Live migrate a VM.
Put a cluster node into maintenance mode.
As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.
[student@workstation ~]$ lab start advanced-review
Instructions
Open a command-line window and use the oc command to log in to your Red Hat OpenShift cluster as the admin user with redhatocp as the password.
The OpenShift cluster API endpoint is https://api.ocp4.example.com:6443.
Open a web browser and log in to the OpenShift web console at https://console-openshift-console.apps.ocp4.example.com
Confirm that the golden-rhel VM is running under the advanced-review project.
Open a command-line window and log in to the OpenShift cluster as the admin user with redhatocp as the password.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login successful.
...output omitted...
Identify the URL for the OpenShift web console.
[student@workstation ~]$ oc whoami --show-console
https://console-openshift-console.apps.ocp4.example.com
Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.
Log in as the admin user with redhatocp as the password.
Navigate to Virtualization → VirtualMachines.
Select the advanced-review project from the list.
Confirm that the golden-rhel VM is running.
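You can also confirm the VM state from the terminal; a quick check using the project name from this exercise:

```shell
# The STATUS column should report Running for the golden-rhel VM.
oc get vm golden-rhel -n advanced-review
```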
Clone the golden-rhel VM to a new www1 VM.
Start the www1 VM.
If you need to access the VM's console, then log in as the root user with redhat as the password.
The cloning process requires that you stop the source machine.
Select the golden-rhel VM and then click Actions → Stop.
Wait for the machine to stop.
Click Actions → Clone.
Set the VM name to www1, select the option to start the cloned VM, and then click Clone.
Navigate to Virtualization → VirtualMachines and wait about 3-5 minutes for the machine to start.
Confirm that the new www1 VM is running.
Take a live snapshot of the golden-rhel VM.
The name of the snapshot must be golden-snap1.
Your snapshot from the preceding step creates a volume snapshot for each VM disk.
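Besides the web console, a snapshot can be declared as a VirtualMachineSnapshot resource. A minimal sketch, assuming the names from this exercise (the API version can vary between OpenShift Virtualization releases):

```yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: golden-snap1
  namespace: advanced-review
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: golden-rhel
```

Because golden-rhel is running, this is a live snapshot; if the QEMU guest agent is installed in the guest, the file systems are quiesced before the snapshot is taken.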
Restore the volume snapshot of the golden-rhel PVC and then use that new PVC as the root disk for a new VM.
Use the following information when restoring the volume snapshot as a new PVC:
| Field | Value |
|---|---|
| Name | root-copy |
| StorageClass | ocs-external-storagecluster-ceph-rbd-virtualization |
| Access mode | Shared access (RWX) |
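The restore form covers the same fields as a declarative restore. A sketch of an equivalent PersistentVolumeClaim, where `<volume-snapshot-name>` stands for the generated snapshot name from the previous step, and the volume mode and size are assumptions that must match the source PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: root-copy
  namespace: advanced-review
spec:
  storageClassName: ocs-external-storagecluster-ceph-rbd-virtualization
  accessModes:
  - ReadWriteMany
  volumeMode: Block                 # assumed; must match the source PVC
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: <volume-snapshot-name>    # from the snapshot list in the console
  resources:
    requests:
      storage: 10Gi                 # assumed; must match the source PVC size
```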
The lab command prepared the /home/student/DO316/labs/advanced-review/www2.yaml file, which declares a VirtualMachine resource.
Edit the file and set the root-copy PVC as the root disk.
Create the www2 VM from the resource file.
Navigate to Storage → VolumeSnapshots. Click the vertical ellipsis icon at the right of the snapshot whose source is the golden-rhel PVC, and then click Restore as new PVC.
Complete the form according to the instructions. Click Restore when done.
From the terminal on the workstation machine, edit the ~/DO316/labs/advanced-review/www2.yaml file to specify the PVC to use as the root disk:
...output omitted...
  volumes:
  - name: rootdisk
    persistentVolumeClaim:
      claimName: root-copy
Use the oc create command to create the resource.
[student@workstation ~]$ oc create -f ~/DO316/labs/advanced-review/www2.yaml
virtualmachine.kubevirt.io/www2 created
If using the OpenShift web console, navigate to Virtualization → VirtualMachines, and then click Create → With YAML.
Copy and paste the /home/student/DO316/labs/advanced-review/www2.yaml file contents into the editor.
Set the claimName parameter to root-copy and then click Create.
Use the oc get command to confirm that the www2 VM is running.
[student@workstation ~]$ oc get vmi -n advanced-review
NAME          AGE     PHASE     IP           NODENAME   READY
golden-rhel   20m     Running   10.11.0.39   worker01   True
www1          20m     Running   10.8.2.100   worker02   True
www2          2m49s   Running   10.8.2.102   worker02   True
Attempt a live migration of the golden-rhel VM.
The live migration fails because of a misconfigured disk.
Identify the disk and then delete it.
(This misconfigured disk contains only temporary data that you can safely discard.)
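You can also identify the misconfigured disk from the CLI by listing the access modes of the PVCs in the project, because live migration requires volumes with shared (ReadWriteMany) access. A sketch using oc custom columns:

```shell
# List each PVC and its access modes; a PVC without ReadWriteMany
# blocks live migration of the VM that uses it.
oc get pvc -n advanced-review \
  -o custom-columns=NAME:.metadata.name,ACCESSMODES:.spec.accessModes
```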
Navigate to Virtualization → VirtualMachines.
Click the vertical ellipsis icon at the right of the golden-rhel VM and then confirm that the Migrate option is disabled.
The Status and Conditions columns show the current status and condition values for the golden-rhel VM.
Select the golden-rhel VM, and navigate to the Diagnostics tab.
The error message for the LiveMigratable condition type indicates that the tempdata PVC is not using the ReadWriteMany access mode.
Click Actions → Stop. Wait for the machine to stop.
Navigate to the Configuration tab.
In the Disks section, click the vertical ellipsis icon at the right of the tempdata disk and then click Detach.
Click Detach to confirm the operation.
Click Actions → Start to start the VM. Wait for the VM to start.
Perform a live migration of the VM. Click Actions → Migrate. Wait for the migration to complete.
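From the CLI, the same migration can be requested with the virtctl migrate golden-rhel command, or by creating a VirtualMachineInstanceMigration resource; a minimal sketch with an assumed resource name:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: golden-rhel-migration   # assumed name; any unique name works
  namespace: advanced-review
spec:
  vmiName: golden-rhel
```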
Put the worker02 cluster node into maintenance mode by using a NodeMaintenance custom resource, and move its workload to the remaining nodes.
The lab command has prepared the /home/student/DO316/labs/advanced-review/nm.yaml resource file, which you can use as a model.
To use that file, you must adapt it to the requirements.
From the terminal on the workstation machine, edit the ~/DO316/labs/advanced-review/nm.yaml file:
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: node-maintenance
spec:
  nodeName: worker02
  reason: "Node maintenance"
Use the oc create command to create the resource.
[student@workstation ~]$ oc create -f ~/DO316/labs/advanced-review/nm.yaml
nodemaintenance.nodemaintenance.medik8s.io/node-maintenance created
Confirm that the node has the SchedulingDisabled status.
[student@workstation ~]$ oc get nodes
NAME       STATUS                     ROLES                         ...
master01   Ready                      control-plane,master,worker   ...
master02   Ready                      control-plane,master,worker   ...
master03   Ready                      control-plane,master,worker   ...
worker01   Ready                      worker                        ...
worker02   Ready,SchedulingDisabled   worker                        ...
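When the maintenance work is finished, deleting the NodeMaintenance resource ends maintenance mode and makes the node schedulable again; for example:

```shell
# Removing the custom resource uncordons worker02.
oc delete -f ~/DO316/labs/advanced-review/nm.yaml
```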