Clone a virtual machine and then configure the clones to provide a highly available web application.
Outcomes
Clone virtual machines.
Take a snapshot of a virtual machine.
Configure readiness probes.
Attach PVCs as disks to virtual machines.
Create a service of the ClusterIP type.
Link a service to virtual machines.
Create a route.
If you did not reset your workstation and server machines at the end of the last chapter, then save any work that you want to keep from earlier exercises on those machines and reset them now.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that the cluster API is reachable.
The command also creates the review-cr3 namespace and starts the golden-web virtual machine in that namespace.
[student@workstation ~]$ lab start review-cr3
Specifications
The Red Hat OpenShift cluster API endpoint is https://api.ocp4.example.com:6443 and the OpenShift web console is available at https://console-openshift-console.apps.ocp4.example.com.
Use the admin user and redhatocp as the password to log in to your OpenShift cluster.
The oc command is available on the workstation machine.
All the resources you create must belong to the review-cr3 namespace.
Clone the golden-web VM.
Use web1 for the name of the clone.
Clone the golden-web VM a second time.
Use web2 for the name of the clone.
Create a snapshot of the web1 VM.
Use web1-snap1 for the name of the snapshot.
The web application that is running inside the VMs exposes the /cgi-bin/health endpoint on port 80.
Add a readiness probe to the web1 and web2 VMs.
The probe must monitor the endpoint every five seconds and must fail after two unsuccessful attempts.
The lab command prepares the /home/student/DO316/labs/review-cr3/readiness.yaml file that you can use as an example.
The lab command prepares the web1-documentroot and web2-documentroot PVCs.
They contain web content for the web1 and web2 VMs.
Attach the web1-documentroot PVC to the web1 VM and attach the web2-documentroot PVC to the web2 VM.
Use the virtio interface when attaching the disks.
Access the console of the web1 VM.
Log in as the root user with redhat as the password.
Run the /root/mount.sh script that the lab command prepares.
The script mounts the additional disk in the /var/www/html directory.
Perform the same operation for the web2 VM.
Add the tier: front label to the web1 and web2 VM resources.
Red Hat OpenShift Virtualization must automatically assign this label to the virtual machine instance (VMI) resources that it creates when you start the VMs.
Create a service named front with the ClusterIP type.
The service must dispatch the web requests between the VMs with the tier: front label.
These VMs host a web service that listens on port TCP 80.
The lab command prepares the /home/student/DO316/labs/review-cr3/service.yaml file that you can use as an example.
Create a route named front that sends the requests from external clients to the front service.
Ensure that the URL for the route is http://front-review-cr3.apps.ocp4.example.com.
You can test the route and the service by using the curl http://front-review-cr3.apps.ocp4.example.com command from the command line on the workstation machine.
You can confirm that the service correctly dispatches the requests between the two VMs by running the curl command several times.
The web1 and web2 VMs serve different web content.
Ensure that the web1 and web2 VMs are running before grading your work.
Open a web browser on the workstation machine and navigate to the OpenShift web console at https://console-openshift-console.apps.ocp4.example.com.
Log in to your OpenShift cluster as the admin user with redhatocp as the password.
Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.
Log in as the admin user with redhatocp as the password.
Clone the golden-web VM to a new VM named web1.
The cloning process requires that you stop the source machine.
Navigate to Virtualization → VirtualMachines, select the golden-web VM, and then click Actions → Stop to stop the VM.
Wait for the machine to stop.
Click Actions → Clone.
Set the VM name to web1, select the option to start the cloned VM after creation, and then click Clone.
Navigate to Virtualization → VirtualMachines and confirm that the new web1 VM is running.
Repeat this step to create a clone named web2.
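If you prefer a declarative approach from the CLI, KubeVirt also provides a clone API. The following is a minimal sketch, assuming that your cluster exposes the clone.kubevirt.io/v1alpha1 API; the console workflow above is what this exercise expects:

# Hypothetical alternative: clone golden-web into web1 with a
# VirtualMachineClone resource instead of the web console.
apiVersion: clone.kubevirt.io/v1alpha1
kind: VirtualMachineClone
metadata:
  name: golden-web-to-web1
  namespace: review-cr3
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: golden-web
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: web1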
Take a snapshot of the web1 VM.
The name of the snapshot must be web1-snap1.
Select the web1 VM, navigate to the Snapshots tab, and then click Take snapshot.
Enter web1-snap1 in the Name field and then click Save.
Wait for the snapshot to report the Succeeded status.
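You can also declare the snapshot as a resource instead of using the console. A minimal sketch, assuming the snapshot.kubevirt.io/v1beta1 API version is available on your cluster (check with oc api-versions | grep snapshot):

# Hypothetical declarative equivalent of the console snapshot.
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: web1-snap1
  namespace: review-cr3
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: web1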
The web servers that are running inside the web1 and web2 VMs expose the /cgi-bin/health endpoint that you can use to monitor the application status.
Add a readiness probe to the web1 and web2 VMs that uses HTTP GET requests to test the /cgi-bin/health endpoint.
Navigate to Virtualization → VirtualMachines, select the web1 VM, and then navigate to the YAML tab.
Use the YAML editor to declare the probe and then click Save.
You can copy and paste the readinessProbe section from the /home/student/DO316/labs/review-cr3/readiness.yaml file that the lab command prepares.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
...output omitted...
spec:
...output omitted...
  template:
    metadata:
...output omitted...
    spec:
      domain:
...output omitted...
      readinessProbe:
        httpGet:
          path: /cgi-bin/health
          port: 80
        initialDelaySeconds: 60
        periodSeconds: 5
        timeoutSeconds: 2
        failureThreshold: 2
        successThreshold: 1
      evictionStrategy: LiveMigrate
      hostname: web1
...output omitted...
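In this probe, periodSeconds: 5 satisfies the requirement to monitor the endpoint every five seconds, and failureThreshold: 2 marks the VM as not ready after two consecutive unsuccessful attempts; initialDelaySeconds: 60 gives the guest time to boot before the first check.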
Restart the web1 VM to apply the changes.
Click Actions → Restart and wait for the VM to reach the Running status before proceeding.
Repeat this step to add the readiness probe to the web2 VM.
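To confirm from the workstation that a VM passes its readiness probe, you can inspect the Ready condition on its VMI. A quick check, assuming the VMI exposes the standard Ready condition; the command should print True once the probe succeeds:

[student@workstation ~]$ oc get vmi web1 -n review-cr3 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
True

Repeat the command with web2 after you configure the second VM.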
Attach the web1-documentroot PVC as a disk to the web1 VM by using the virtio interface.
Repeat this step to attach the web2-documentroot PVC as a disk to the web2 VM.
Use the OpenShift web console to stop the web1 VM.
Navigate to Virtualization → VirtualMachines, select the web1 VM, and then click Actions → Stop.
Wait for the machine to stop.
The web interface takes a few seconds to detect that the VM is stopped.
Navigate to the Disks tab and then click Add disk. Complete the form by using the following information:
If the Interface field is set to scsi and virtio is not available, then click Cancel and start over.
| Field | Value |
|---|---|
| Source | Use an existing PVC |
| PVC | web1-documentroot |
| Type | Disk |
| Interface | virtio |
Click Add to attach the disk to the VM.
Click Actions → Start to start the VM.
Repeat this step to attach the web2-documentroot PVC to the web2 VM.
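Behind the form, the console adds a disk entry and a matching volume to the VM definition. A sketch of the resulting snippet for web1, where the documentroot disk name is illustrative (the console might generate a different name); web2 is analogous with the web2-documentroot claim:

spec:
  template:
    spec:
      domain:
        devices:
          disks:
...output omitted...
          - name: documentroot
            disk:
              bus: virtio
      volumes:
...output omitted...
      - name: documentroot
        persistentVolumeClaim:
          claimName: web1-documentroot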
Log in to the web1 VM console as the root user with redhat as the password.
Then, run the /root/mount.sh script.
Repeat this step for the web2 VM.
Navigate to Virtualization → VirtualMachines, select the web1 VM, and then navigate to the Console tab.
Log in to the VM console as the root user with redhat as the password.
Run the /root/mount.sh command.
[root@web1 ~]# /root/mount.sh
...output omitted...
Mount successful

Log out of the VM console.
[root@web1 ~]# logout

Repeat this step to mount the disk on the web2 VM.
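The exercise supplies /root/mount.sh inside the VMs, so you do not need to write it yourself. For context only, a minimal sketch of what such a script might look like, assuming the additional virtio disk appears as /dev/vdb inside the guest (the device name and the SELinux step are assumptions, not the actual script contents):

#!/bin/bash
# Hypothetical sketch of a mount script for the additional data disk.
DEV=/dev/vdb                  # assumption: the attached virtio disk
mount "$DEV" /var/www/html && echo "Mount successful"
restorecon -R /var/www/html   # assumption: reset SELinux labels for httpd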
Add the tier: front label to the web1 and web2 VMs so that the virt-launcher pods for these VMs inherit the label.
Navigate to Virtualization → VirtualMachines, select the web1 VM, and then navigate to the YAML tab.
In the YAML editor, add the tier: front label in the .spec.template.metadata.labels path.
...output omitted...
spec:
  dataVolumeTemplates:
...output omitted...
  template:
    metadata:
      creationTimestamp: null
      labels:
        tier: front
        flavor.template.kubevirt.io/small: "true"
        kubevirt.io/domain: golden-web
        kubevirt.io/size: small
...output omitted...

Restart the web1 VM to apply the changes.
Click Actions → Restart and wait for the VM to reach the Running status before proceeding.
Repeat this step to add the same label to the web2 VM.
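To verify that the label reached the running VMIs, filter by the label from the workstation; both web1 and web2 must appear in the output:

[student@workstation ~]$ oc get vmi -n review-cr3 -l tier=front
...output omitted...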
Create a service named front with the ClusterIP type.
Use the tier: front label for the selector.
The service must listen on TCP port 80 and forward the traffic to the VM on port 80.
From the OpenShift web console, navigate to Networking → Services.
Click Create Service and then use the YAML editor to create the front service with the following content:
apiVersion: v1
kind: Service
metadata:
  name: front
  namespace: review-cr3
spec:
  type: ClusterIP
  selector:
    tier: front
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

The lab command prepares the /home/student/DO316/labs/review-cr3/service.yaml file as an example.
Click Create to create the service.
The Details tab for the front service is displayed.
Confirm that the front service has two active endpoints.
Click the Pods tab and confirm that the list includes the virt-launcher pods for the web1 and web2 VMs.
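Equivalently, from the command line, the endpoints of the front service must list the IP addresses of the two virt-launcher pods:

[student@workstation ~]$ oc get endpoints front -n review-cr3
...output omitted...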
Create a route to access the web application that is running inside the VMs by using the http://front-review-cr3.apps.ocp4.example.com URL.
From the web console, navigate to Networking → Routes. Click Create Route and complete the form by using the following information:
| Field | Value |
|---|---|
| Name | front |
| Service | front |
| Target port | 80 |
Click Create.
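As a CLI alternative to the form, you can expose the service instead. A sketch, assuming the cluster uses the default apps.ocp4.example.com wildcard domain given in the specifications, so that the generated hostname front-review-cr3.apps.ocp4.example.com matches the required URL:

[student@workstation ~]$ oc expose service front -n review-cr3
route.route.openshift.io/front exposed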
From a command-line window on the workstation machine, use the curl command to confirm that you can access the web application from outside the cluster.
[student@workstation ~]$ curl http://front-review-cr3.apps.ocp4.example.com
<!doctype html>
<html lang="en">
<head>
<title>Test page web2</title>
</head>
<body>
<h1>Test Page web2</h1>
Welcome to web2.
</body>
</html>

Rerun the curl command several times to confirm that the service dispatches the requests between the two VMs.
[student@workstation ~]$ curl http://front-review-cr3.apps.ocp4.example.com/?[1-3]
--_curl_--front-review-cr3.apps.ocp4.example.com/?1
<!doctype html>
<html lang="en">
<head>
<title>Test page web2</title>
</head>
<body>
<h1>Test Page web2</h1>
Welcome to web2.
</body>
</html>
[2/3]: front-review-cr3.apps.ocp4.example.com/?2 --> <stdout>
--_curl_--front-review-cr3.apps.ocp4.example.com/?2
<!doctype html>
<html lang="en">
<head>
<title>Test page web1</title>
</head>
<body>
<h1>Test Page web1</h1>
Welcome to web1.
</body>
</html>
[3/3]: front-review-cr3.apps.ocp4.example.com/?3 --> <stdout>
--_curl_--front-review-cr3.apps.ocp4.example.com/?3
<!doctype html>
<html lang="en">
<head>
<title>Test page web2</title>
</head>
<body>
<h1>Test Page web2</h1>
Welcome to web2.
</body>
</html>