Create a virtual machine template, use that template to deploy a virtual machine, and put a cluster node into maintenance mode.
Outcomes
Create a virtual machine template.
Deploy virtual machines from a template.
Manage user access rights.
Put a cluster node into maintenance mode.
If you did not reset your workstation and server machines at the end of the last chapter, then save any work that you want to keep from earlier exercises on those machines and reset them now.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that the cluster API is reachable and creates the review-cr2 namespace.
[student@workstation ~]$ lab start review-cr2
Specifications
The Red Hat OpenShift cluster API endpoint is https://api.ocp4.example.com:6443 and the OpenShift web console is available at https://console-openshift-console.apps.ocp4.example.com.
Use the admin user with redhatocp as the password to log in to your OpenShift cluster.
The oc command is available on the workstation machine.
During this exercise, you also use the developer user account with developer as the password.
Create a virtual machine template in the review-cr2 project with the following properties:
| Parameter | Value |
|---|---|
| Project | review-cr2 |
| Template name and display name | dev-web-rhel8 |
| Template provider | Red Hat Training |
| Operating system | Red Hat Enterprise Linux 8.0 or higher |
| Boot disk URL | http://utility.lab.example.com:8080/openshift4/images/helloworld.qcow2 |
| Size | Tiny |
| Workload type | Server |
| Root disk size | 10 GiB |
| Root disk interface | virtio |
| Root disk storage class | ocs-external-storagecluster-ceph-rbd-virtualization |
Then, modify the template so that the ${NAME} template variable sets the name of the rootdisk data volume to the name of the VM that the template creates.
The lab command prepares the /home/student/DO316/labs/review-cr2/template_parameters.txt file that lists these parameters.
You can copy and paste the parameters from the file into the OpenShift web console.
Grant admin access to the developer user for the review-cr2 project.
As the developer user, create a VM named web1 in the review-cr2 namespace.
Use the dev-web-rhel8 template.
Put the worker02 cluster node into maintenance mode and move its workload to the remaining nodes.
Ensure that the web1 VM is running before grading your work.
From a command line on the workstation machine, use the oc command to log in to your OpenShift cluster as the admin user with redhatocp as the password.
Open a web browser and log in to the OpenShift web console at https://console-openshift-console.apps.ocp4.example.com.
From a command line, log in to your OpenShift cluster as the admin user.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login successful.
...output omitted...
Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.
Select and log in as the admin user with redhatocp as the password.
Create the dev-web-rhel8 VM template in the review-cr2 project.
Navigate to Virtualization → Templates and ensure that All Projects is selected in the Project list.
Enter rhel8-server in the Search by Name field.
Click the vertical ellipsis icon next to the rhel8-server-tiny line, and then click Clone.
Complete the clone form by using the following information and then click Clone.
| Field | Value |
|---|---|
| Template name | dev-web-rhel8 |
| Template project | review-cr2 |
| Template display name | dev-web-rhel8 |
| Template provider | Red Hat Training |
From the dev-web-rhel8 template page, navigate to the Disks tab.
Click the vertical ellipsis icon next to the rootdisk line, and then click Detach.
Click Detach to confirm.
Click Add disk. Complete the form by using the following information and then click Save.
| Field | Value |
|---|---|
| | enabled |
| Name | rootdisk |
| Source | URL (creates PVC) |
| URL | http://utility.lab.example.com:8080/openshift4/images/helloworld.qcow2 |
| Size | 10 GiB |
| Interface | virtio |
| Storage class | ocs-external-storagecluster-ceph-rbd-virtualization |
Use the YAML editor to set the name of the rootdisk data volume to the name of the VM that the template creates.
Navigate to the YAML tab and locate the spec.dataVolumeTemplates.metadata.name and spec.template.volumes.dataVolume.name objects.
Update the values of the objects to use the ${NAME} template variable and then click Save.
apiVersion: template.openshift.io/v1
kind: Template
...output omitted...
spec:
  dataVolumeTemplates:
  - metadata:
      name: '${NAME}'
...output omitted...
  template:
    volumes:
    - name: rootdisk
      dataVolume:
        name: '${NAME}'
...output omitted...
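When the template is processed, every ${NAME} placeholder is replaced with the value that is supplied for the NAME parameter, so the data volume inherits the VM name. The following is a minimal local sketch of that substitution; the template snippet and the web1 name are illustrative, and on a real cluster the substitution is performed server-side by `oc process`:

```shell
# Minimal sketch of template parameter substitution: every ${NAME}
# placeholder is replaced with the supplied parameter value.
template='
dataVolumeTemplates:
- metadata:
    name: ${NAME}
volumes:
- name: rootdisk
  dataVolume:
    name: ${NAME}
'
NAME=web1
rendered=$(printf '%s' "$template" | sed "s/\${NAME}/${NAME}/g")
printf '%s\n' "$rendered"
```

Against a live cluster, `oc process dev-web-rhel8 -n review-cr2 -p NAME=web1` performs the equivalent substitution and prints the resulting objects.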
Grant admin access to the developer user for the review-cr2 project.
From a command-line window on the workstation machine, use the oc create rolebinding command to grant admin access to the developer user.
[student@workstation ~]$ oc create rolebinding admin --clusterrole=admin \
--user=developer -n review-cr2
rolebinding.rbac.authorization.k8s.io/admin created

Confirm that the developer user has admin rights.

[student@workstation ~]$ oc get rolebindings -n review-cr2 -o wide
NAME    ROLE                AGE   USERS       GROUPS   SERVICEACCOUNTS
admin   ClusterRole/admin   39s   developer
...output omitted...
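The role binding scopes the admin cluster role to the developer user in the review-cr2 namespace only. The following toy sketch models that namespace-scoped check locally; the binding entry mirrors this lab, whereas a real cluster evaluates access through the RBAC authorizer (for example, with `oc auth can-i --as developer -n review-cr2`):

```shell
# Toy model of a namespace-scoped role binding: user, role, namespace.
bindings='developer admin review-cr2'

# can_admin USER NAMESPACE -> prints yes if a matching binding exists
can_admin() {
  if printf '%s\n' "$bindings" | grep -q "^$1 admin $2\$"; then
    echo yes
  else
    echo no
  fi
}

can_admin developer review-cr2   # -> yes
can_admin developer production   # -> no
```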
As the developer user, use the dev-web-rhel8 template to create the web1 VM in the review-cr2 namespace.
From the OpenShift web console, click the admin user menu and then click Log out.
Select and log in as the developer user with developer as the password.
Select the Administrator perspective.
Navigate to Virtualization → Catalog and then set the project to review-cr2.
Select the dev-web-rhel8 template from the catalog.
Set the VirtualMachine name field to web1 and then click Quick create VirtualMachine.
Wait for the web1 VM to have the Running status.
Put the worker02 cluster node into maintenance mode and drain its workload.
From the command-line window on the workstation machine, use the oc adm cordon command to mark the node as unschedulable.
[student@workstation ~]$ oc adm cordon worker02
node/worker02 cordoned

Confirm that the node has the SchedulingDisabled status.

[student@workstation ~]$ oc get nodes
NAME       STATUS                     ROLES                         AGE   VERSION
master01   Ready                      control-plane,master,worker   19d   v1.27.10+28ed2d7
master02   Ready                      control-plane,master,worker   19d   v1.27.10+28ed2d7
master03   Ready                      control-plane,master,worker   19d   v1.27.10+28ed2d7
worker01   Ready                      worker                        19d   v1.27.10+28ed2d7
worker02   Ready,SchedulingDisabled   worker                        19d   v1.27.10+28ed2d7
Run the oc adm drain command to evacuate all workloads from the node.
This command might take a few minutes to complete.
[student@workstation ~]$ oc adm drain worker02 \
--delete-emptydir-data --ignore-daemonsets
node/worker02 already cordoned
...output omitted...
node/worker02 drained
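Cordoning removes a node from the scheduler's candidate set, and draining then evicts its pods so that they restart on the remaining nodes. The following toy sketch models that candidate filter locally; the node names come from this lab, and the real scheduler consults each node's spec.unschedulable field instead:

```shell
# Toy scheduling filter: cordoned nodes are excluded from placement.
nodes='worker01 worker02'
cordoned='worker02'

schedulable() {
  for n in $nodes; do
    case " $cordoned " in
      *" $n "*) ;;                # cordoned: skip
      *) printf '%s\n' "$n" ;;
    esac
  done
}

schedulable   # -> worker01
```

You can confirm where the web1 VM instance landed with `oc get vmi web1 -n review-cr2 -o wide`, and when maintenance is finished, `oc adm uncordon worker02` returns the node to the candidate set.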