Lab: Clone a Virtual Machine and Configure Load Balancing

Clone a virtual machine and then configure the clones to provide a highly available web application.

Outcomes

  • Clone virtual machines.

  • Take a snapshot of a virtual machine.

  • Configure readiness probes.

  • Attach PVCs as disks to virtual machines.

  • Create a service of the ClusterIP type.

  • Link a service to virtual machines.

  • Create a route.

If you did not reset your workstation and server machines at the end of the last chapter, then save any work that you want to keep from earlier exercises on those machines and reset them now.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command ensures that the cluster API is reachable. The command also creates the review-cr3 namespace and starts the golden-web virtual machine in that namespace.

[student@workstation ~]$ lab start review-cr3

Specifications

The Red Hat OpenShift cluster API endpoint is https://api.ocp4.example.com:6443 and the OpenShift web console is available at https://console-openshift-console.apps.ocp4.example.com. Use the admin user and redhatocp as the password to log in to your OpenShift cluster. The oc command is available on the workstation machine. All the resources you create must belong to the review-cr3 namespace.

  • Clone the golden-web VM. Use web1 for the name of the clone.

  • Clone the golden-web VM a second time. Use web2 for the name of the clone.

  • Create a snapshot of the web1 VM. Use web1-snap1 for the name of the snapshot.

  • The web application that is running inside the VMs exposes the /cgi-bin/health endpoint on port 80. Add a readiness probe to the web1 and web2 VMs. The probe must monitor the endpoint every five seconds and must fail after two unsuccessful attempts. The lab command prepares the /home/student/DO316/labs/review-cr3/readiness.yaml file that you can use as an example.

  • The lab command prepares the web1-documentroot and web2-documentroot PVCs. They contain web content for the web1 and web2 VMs. Attach the web1-documentroot PVC to the web1 VM and attach the web2-documentroot PVC to the web2 VM. Use the virtio interface when attaching the disks.

  • Access the console of the web1 VM. Log in as the root user with redhat as the password. Run the /root/mount.sh script that the lab command prepares. The script mounts the additional disk in the /var/www/html directory. Perform the same operation for the web2 VM.

  • Add the tier: front label to the web1 and web2 VM resources. Red Hat OpenShift Virtualization must automatically assign this label to the virtual machine instance (VMI) resources that it creates when you start the VMs.

  • Create a service named front with the ClusterIP type. The service must dispatch the web requests between the VMs with the tier: front label. These VMs host a web service that listens on port TCP 80. The lab command prepares the /home/student/DO316/labs/review-cr3/service.yaml file that you can use as an example.

  • Create a route named front that sends the requests from external clients to the front service. Ensure that the URL for the route is http://front-review-cr3.apps.ocp4.example.com.

    You can test the route and the service by using the curl http://front-review-cr3.apps.ocp4.example.com command from the command line on the workstation machine. You can confirm that the service correctly dispatches the requests between the two VMs by running the curl command several times. The web1 and web2 VMs serve different web content.

  • Ensure that the web1 and web2 VMs are running before grading your work.

  1. Open a web browser on the workstation machine and navigate to the OpenShift web console at https://console-openshift-console.apps.ocp4.example.com. Log in to your OpenShift cluster as the admin user with redhatocp as the password.

    1. Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com. Select htpasswd_provider and log in as the admin user with redhatocp as the password.

  2. Clone the golden-web VM to a new VM named web1.

    1. The cloning process requires that you stop the source machine. Navigate to Virtualization → VirtualMachines, select the golden-web VM, and then click Actions → Stop to stop the VM. Wait for the machine to stop.

    2. Click Actions → Clone. Set the VM name to web1, select Start VirtualMachine on clone, and then click Clone.

    3. Navigate to Virtualization → VirtualMachines and confirm that the new web1 VM is running.

    4. Repeat this step to create a clone named web2.
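
      Depending on your OpenShift Virtualization version, you can also clone declaratively by creating a VirtualMachineClone resource instead of using the web console. The following is a sketch; the resource name golden-web-to-web1 is arbitrary, and the API version might differ on your cluster:

```yaml
apiVersion: clone.kubevirt.io/v1alpha1
kind: VirtualMachineClone
metadata:
  name: golden-web-to-web1
  namespace: review-cr3
spec:
  # Clone from the stopped golden-web VM...
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: golden-web
  # ...to a new VM named web1.
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: web1
```

      With this approach, you still must stop the source VM first and start the clone yourself after the clone operation completes.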

  3. Take a snapshot of the web1 VM. The name of the snapshot must be web1-snap1.

    1. Select the web1 VM, navigate to the Snapshots tab, and then click Take Snapshot.

    2. Enter web1-snap1 in the Name field and then click Save.

    3. Wait for the snapshot to report the Succeeded status.
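
      Alternatively, you can create the snapshot declaratively with a VirtualMachineSnapshot resource. The following is a sketch; the snapshot.kubevirt.io API version might differ on your cluster:

```yaml
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: web1-snap1
  namespace: review-cr3
spec:
  # The running web1 VM is the source of the snapshot.
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: web1
```

      After you create the resource, wait for its status to report that the snapshot is ready to use.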

  4. The web servers that are running inside the web1 and web2 VMs expose the /cgi-bin/health endpoint that you can use to monitor the application status. Add a readiness probe to the web1 and web2 VMs that uses HTTP GET requests to test the /cgi-bin/health endpoint.

    1. Navigate to Virtualization → VirtualMachines, select the web1 VM, and then navigate to the YAML tab. Use the YAML editor to declare the probe and then click Save.

      You can copy and paste the readinessProbe section from the /home/student/DO316/labs/review-cr3/readiness.yaml file that the lab command prepares.

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
      ...output omitted...
      spec:
        ...output omitted...
        template:
          metadata:
          ...output omitted...
          spec:
            domain:
              ...output omitted...
            readinessProbe:
              httpGet:
                path: /cgi-bin/health
                port: 80
              initialDelaySeconds: 60
              periodSeconds: 5
              timeoutSeconds: 2
              failureThreshold: 2
              successThreshold: 1
            evictionStrategy: LiveMigrate
            hostname: web1
      ...output omitted...
    2. Restart the web1 VM to apply the changes. Click Actions → Restart and wait for the VM to reach the Running status before proceeding.

    3. Repeat this step to add the readiness probe to the web2 VM.

  5. Attach the web1-documentroot PVC as a disk to the web1 VM by using the virtio interface. Repeat this step to attach the web2-documentroot PVC as a disk to the web2 VM.

    1. Use the OpenShift web console to stop the web1 VM. Navigate to Virtualization → VirtualMachines, select the web1 VM, and then click Actions → Stop. Wait for the machine to stop.

    2. Navigate to the Configuration tab and select Disks. Click Add Disk. Complete the form by using the following information:

      Note

      If the Interface field is set to scsi and virtio is not available, then click Cancel and start over. The web interface takes a few seconds to detect that the VM is stopped.

      Field                  Value
      Source                 Use an existing PVC
      PersistentVolumeClaim  web1-documentroot
      Type                   Disk
      Interface              virtio

      Click Add to attach the disk to the VM.

    3. Click Actions → Start to start the VM.

    4. Repeat this step to attach the web2-documentroot PVC to the web2 VM.
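
      If you prefer to edit the VM YAML instead of using the Add Disk form, the equivalent change adds a disk that uses the virtio bus and a matching volume that references the PVC. The following is a sketch for the web1 VM; the disk name documentroot is arbitrary:

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            # ...existing disks omitted...
            - name: documentroot
              disk:
                bus: virtio
      volumes:
        # ...existing volumes omitted...
        - name: documentroot
          persistentVolumeClaim:
            claimName: web1-documentroot
```

      The disk and volume entries are linked by the name field, and the claimName field must match the PVC that the lab command prepares.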

  6. Log in to the web1 VM console as the root user with redhat as the password. Then, run the /root/mount.sh script. Repeat this step for the web2 VM.

    1. Navigate to Virtualization → VirtualMachines, select the web1 VM, and then navigate to the Console tab.

    2. Log in to the VM console as the root user with redhat as the password.

    3. Run the /root/mount.sh command.

      [root@golden-web ~]# /root/mount.sh
      ...output omitted...
      Mount successful
    4. Log out of the VM console.

      [root@golden-web ~]# logout
    5. Repeat this step to mount the disk on the web2 VM.

  7. Add the tier: front label to the virt-launcher pods for the web1 and web2 VMs.

    1. Navigate to Virtualization → VirtualMachines, select the web1 VM, and then navigate to the YAML tab.

    2. In the YAML editor, add the tier: front label in the .spec.template.metadata.labels path.

      ...output omitted...
      spec:
        dataVolumeTemplates:
        ...output omitted...
        template:
          metadata:
            creationTimestamp: null
            labels:
              tier: front
              flavor.template.kubevirt.io/small: "true"
              kubevirt.io/domain: golden-web
              kubevirt.io/size: small
              ...output omitted...
    3. Restart the web1 VM to apply the changes. Click Actions → Restart and wait for the VM to reach the Running status before proceeding.

    4. Repeat this step to add the same label to the web2 VM.

  8. Create a service named front with the ClusterIP type. Use the tier: front label for the selector. The service must listen on TCP port 80 and forward the traffic to the VM on port 80.

    1. From the OpenShift web console, navigate to Networking → Services.

    2. Click Create Service and then use the YAML editor to create the front service with the following content:

      apiVersion: v1
      kind: Service
      metadata:
        name: front
        namespace: review-cr3
      spec:
        type: ClusterIP
        selector:
          tier: front
        ports:
          - protocol: TCP
            port: 80
            targetPort: 80

      The lab command prepares the /home/student/DO316/labs/review-cr3/service.yaml file as an example.

    3. Click Create to create the service. The Details tab for the front service is displayed.

    4. Confirm that the front service has two active endpoints. Click the Pods tab and confirm that the list includes the virt-launcher pods for the web1 and web2 VMs.

  9. Create a route to access the web application that is running inside the VMs by using the http://front-review-cr3.apps.ocp4.example.com URL.

    1. From the web console, navigate to Networking → Routes. Click Create Route and complete the form by using the following information:

      Field        Value
      Name         front
      Service      front
      Target port  80 → 80 (TCP)

      Click Create.
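
      If you prefer YAML, an equivalent route definition looks like the following sketch. Because spec.host is omitted, OpenShift derives the front-review-cr3.apps.ocp4.example.com host name from the route name and the namespace:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: front
  namespace: review-cr3
spec:
  # Send external requests to the front service.
  to:
    kind: Service
    name: front
  # Forward to the service port that listens on TCP 80.
  port:
    targetPort: 80
```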

    2. From a command-line window on the workstation machine, use the curl command to confirm that you can access the web application from outside the cluster.

      [student@workstation ~]$ curl http://front-review-cr3.apps.ocp4.example.com
      <!doctype html>
      <html lang="en">
      <head>
        <title>Test page web2</title>
      </head>
      
      <body>
        <h1>Test Page web2</h1>
         Welcome to web2.
      </body>
      </html>
    3. Rerun the curl command several times to confirm that the service dispatches the requests between the two VMs.

      [student@workstation ~]$ curl http://front-review-cr3.apps.ocp4.example.com/?[1-3]
      [1/3]: front-review-cr3.apps.ocp4.example.com/?1 --> <stdout>
      --_curl_--front-review-cr3.apps.ocp4.example.com/?1
      <!doctype html>
      <html lang="en">
      <head>
        <title>Test page web2</title>
      </head>
      
      <body>
        <h1>Test Page web2</h1>
         Welcome to web2.
      </body>
      </html>
      
      [2/3]: front-review-cr3.apps.ocp4.example.com/?2 --> <stdout>
      --_curl_--front-review-cr3.apps.ocp4.example.com/?2
      <!doctype html>
      <html lang="en">
      <head>
        <title>Test page web1</title>
      </head>
      
      <body>
        <h1>Test Page web1</h1>
         Welcome to web1.
      </body>
      </html>
      
      [3/3]: front-review-cr3.apps.ocp4.example.com/?3 --> <stdout>
      --_curl_--front-review-cr3.apps.ocp4.example.com/?3
      <!doctype html>
      <html lang="en">
      <head>
        <title>Test page web2</title>
      </head>
      
      <body>
        <h1>Test Page web2</h1>
         Welcome to web2.
      </body>
      </html>

Evaluation

As the student user on the workstation machine, use the lab command to grade your work. Correct any reported failures and rerun the command until successful.

[student@workstation ~]$ lab grade review-cr3

Finish

On the workstation machine, change to the student user home directory and use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish review-cr3
Revision: do316-4.14-d8a6b80