Guided Exercise: Configure Health Probes for Virtual Machines

Create health probes for VMs behind a service to enable Kubernetes to detect an application's availability and to modify service networking accordingly.

Outcomes

  • Declare readiness and liveness probes for VMs.

  • Add watchdog devices to VMs.

As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.

[student@workstation ~]$ lab start ha-probe

Instructions

The lab command creates the ha-probe namespace and starts four virtual machines in that namespace:

  • The www1 and www2 VMs, which host a web application

  • The watch1 VM, which you use to test a watchdog device

  • The mariadb-server VM, which hosts a MariaDB database

The lab command creates a service that dispatches the client requests between the two web servers, and also creates a route resource so that the clients can reach the web application by using the http://www-ha-probe.apps.ocp4.example.com URL.
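
The service routes traffic to the web VMs by matching labels on their virt-launcher pods. For reference, such a Service might look like the following minimal sketch; the app: www selector label is an assumption for illustration, and the lab script might use different labels.

apiVersion: v1
kind: Service
metadata:
  name: front
  namespace: ha-probe
spec:
  selector:
    app: www          # assumption: a label that both web VMs carry
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80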

  1. As the admin user, confirm that the four VMs are running in the ha-probe project.

    1. From a command-line window, log in to your Red Hat OpenShift cluster as the admin user with redhatocp as the password.

      [student@workstation ~]$ oc login -u admin -p redhatocp \
        https://api.ocp4.example.com:6443
      Login successful.
      ...output omitted...
    2. Change to the ha-probe project.

      [student@workstation ~]$ oc project ha-probe
      Now using project "ha-probe" on server "https://api.ocp4.example.com:6443".
    3. Confirm that the mariadb-server, watch1, www1, and www2 VMs are running.

      [student@workstation ~]$ oc get vm
      NAME             AGE   STATUS    READY
      mariadb-server   13m   Running   True
      watch1           15m   Running   True
      www1             18m   Running   True
      www2             16m   Running   True
  2. Review the service and the route that the lab command deployed. Confirm that Kubernetes dispatches requests sent to the http://www-ha-probe.apps.ocp4.example.com URL between the www1 and www2 VMs.

    1. Use the oc command to list the VMI resources in the ha-probe project. Note the IP addresses of the www1 and www2 VM instances. The IP addresses might differ in your environment.

      [student@workstation ~]$ oc get vmi
      NAME             AGE   PHASE     IP           NODENAME   READY
      mariadb-server   12m   Running   10.9.0.26    master02   True
      watch1           15m   Running   10.8.2.50    worker02   True
      www1             18m   Running   10.8.2.47    worker02   True
      www2             16m   Running   10.11.0.40   worker01   True
    2. Confirm that the front service exists.

      [student@workstation ~]$ oc get service
      NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
      front   ClusterIP   172.30.49.84   <none>        80/TCP    12m
    3. List the service endpoints. Notice that the service uses the IP addresses of the two VMs for its endpoints.

      [student@workstation ~]$ oc get endpoints
      NAME    ENDPOINTS                    AGE
      front   10.11.0.40:80,10.8.2.47:80   14m
    4. Confirm that the www-front route exists. Notice that the DNS name for the route is www-ha-probe.apps.ocp4.example.com. This DNS name points to the front service.

      [student@workstation ~]$ oc get route
      NAME        HOST/PORT                            PATH   SERVICES   PORT ...
      www-front   www-ha-probe.apps.ocp4.example.com          front      80   ...
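
      For reference, a route that points to the front service might look like the following minimal sketch; the exact fields are illustrative, and the lab script might set additional options.

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: www-front
        namespace: ha-probe
      spec:
        host: www-ha-probe.apps.ocp4.example.com
        to:
          kind: Service
          name: front
        port:
          targetPort: 80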
    5. Run the curl command several times to confirm that Kubernetes dispatches the requests between the two VMs. To distinguish the VMs, the web servers return a web page that includes the VM's hostname.

      [student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
      Welcome to www2
      [student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
      Welcome to www2
      [student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
      Welcome to www1
      [student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
      Welcome to www1
      [student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
      Welcome to www2
  3. The web servers that run inside the www1 and www2 VMs expose the /health endpoint to monitor the application status. Use the web console to add a readiness probe to the www1 and www2 VMs that uses HTTP GET requests to test the /health endpoint.

    1. Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com. Select htpasswd_provider and log in as the admin user with redhatocp as the password.

    2. Navigate to Virtualization → VirtualMachines, and then select the ha-probe project. Select the www1 VM, and then navigate to the YAML tab.

    3. Use the YAML editor to declare the probe as follows, and then click Save. You can copy and paste the readinessProbe section from the ~/DO316/labs/ha-probe/readiness.yaml file.

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
      ...output omitted...
      spec:
        ...output omitted...
        template:
          metadata:
          ...output omitted...
          spec:
            architecture: amd64
            domain:
              ...output omitted...
            readinessProbe:
              httpGet:
                path: /health
                port: 80
              initialDelaySeconds: 10
              periodSeconds: 5
              timeoutSeconds: 2
              failureThreshold: 2
              successThreshold: 1
            evictionStrategy: LiveMigrate
            hostname: www1
      ...output omitted...
    4. Click Actions → Restart to restart the www1 VMI resource so that Red Hat OpenShift Virtualization recognizes the new probe.

    5. Wait a few minutes until the VMI is again in the Running state.

    6. Repeat this process to add the readiness probe to the www2 VM. Alternatively, you can declare the probe from the command line, as shown in the sketch after this step.
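
      If you prefer the command line, a merge patch can declare the same probe. The following sketch applies the settings from the readiness.yaml file to the www2 VM; it is an illustrative alternative only, and you must still restart the VM for the change to take effect.

      [student@workstation ~]$ oc patch vm www2 --type=merge --patch '
      spec:
        template:
          spec:
            readinessProbe:
              httpGet:
                path: /health
                port: 80
              initialDelaySeconds: 10
              periodSeconds: 5
              timeoutSeconds: 2
              failureThreshold: 2
              successThreshold: 1
      '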

  4. Test the probe by stopping the httpd service running inside the www2 VM.

    1. Change to the command-line window, and run the ~/DO316/labs/ha-probe/loop.sh script. This script runs the curl command in an infinite loop.

      Leave the command running.

      [student@workstation ~]$ ~/DO316/labs/ha-probe/loop.sh
      Welcome to www2
      Welcome to www2
      Welcome to www1
      Welcome to www1
      Welcome to www2
      Welcome to www2
      Welcome to www1
      ...output omitted...

      Notice that Kubernetes dispatches the requests between the two VMs.

    2. From the OpenShift web console, navigate to Virtualization → VirtualMachines. Select the www2 VM, and then navigate to the Console tab.

    3. Log in as the root user with redhat as the password.

    4. Stop the httpd service.

      [root@www2 ~]# systemctl stop httpd

      Do not close the console.

    5. Switch back to the command-line window on the workstation machine. After a few seconds, the output of the loop.sh script shows that Kubernetes sends the requests only to the www1 VM.

      Press Ctrl+C to stop the script.

      ...output omitted...
      Welcome to www2
      Welcome to www2
      Welcome to www1
      Welcome to www2
      Welcome to www1
      Welcome to www1
      Welcome to www1
      Welcome to www1
      Welcome to www1
      Welcome to www1
      ^C
    6. Use the oc command to confirm that the front service has only one remaining endpoint.

      [student@workstation ~]$ oc get endpoints
      NAME    ENDPOINTS      AGE
      front   10.8.2.51:80   28m
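
      You can also check how the failing probe is reflected in the VMI status. While the httpd service is stopped, the Ready condition of the www2 VMI should report False:

      [student@workstation ~]$ oc get vmi www2 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
      False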
    7. Switch back to the www2 VM console that is running inside the web browser and restart the httpd service. Log out of the console when done.

      [root@www2 ~]# systemctl start httpd
      [root@www2 ~]# logout
    8. Return to the command line on the workstation machine. Run the ~/DO316/labs/ha-probe/loop.sh script to confirm that Kubernetes dispatches the requests between the two VMs again.

      Press Ctrl+C to stop the script.

      [student@workstation ~]$ ~/DO316/labs/ha-probe/loop.sh
      Welcome to www2
      Welcome to www2
      Welcome to www1
      Welcome to www1
      Welcome to www2
      Welcome to www2
      ^C
    9. Use the oc command to confirm that the front service has two endpoints.

      [student@workstation ~]$ oc get endpoints
      NAME    ENDPOINTS                    AGE
      front   10.11.0.45:80,10.8.2.51:80   30m
  5. The MariaDB database that is running inside the mariadb-server VM listens on TCP port 3306. Add a liveness probe that tests the service by attempting TCP connections to that port.

    1. Change to the OpenShift web console window. Navigate to Virtualization → VirtualMachines. Select the mariadb-server VM, and then navigate to the YAML tab.

    2. Use the YAML editor to declare the probe as follows, and then click Save. You can copy and paste the livenessProbe section from the ~/DO316/labs/ha-probe/liveness.yaml file.

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
      ...output omitted...
      spec:
        ...output omitted...
        template:
          metadata:
          ...output omitted...
          spec:
            architecture: amd64
            domain:
              ...output omitted...
            livenessProbe:
              tcpSocket:
                port: 3306
              initialDelaySeconds: 10
              periodSeconds: 5
            evictionStrategy: LiveMigrate
            hostname: mariadb-server
      ...output omitted...
    3. Click Actions → Restart to restart the mariadb-server VMI resource so that Red Hat OpenShift Virtualization recognizes the new probe.

    4. Wait a few minutes until the VMI is again in the Running state.

  6. Test the probe by stopping the mysql service running inside the mariadb-server VM.

    1. From the OpenShift web console, navigate to the Console tab of the mariadb-server VM.

    2. Log in as the root user with redhat as the password.

    3. Stop the mysql service.

      [root@mariadb-server ~]# systemctl stop mysql
    4. Because the liveness probe fails, the VM restarts after a few seconds.
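
      To review what happened, inspect the recent events in the namespace. The exact messages vary between versions, but you should find events that report the failed liveness probe before the restart:

      [student@workstation ~]$ oc get events --sort-by=.lastTimestamp | grep -i liveness
      ...output omitted...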

  7. Add a watchdog device to the watch1 VM.

    1. Navigate to Virtualization → VirtualMachines. Select the watch1 VM, and then navigate to the YAML tab.

    2. Use the YAML editor to declare the watchdog device as follows, and then click Save. You can copy and paste the watchdog section from the ~/DO316/labs/ha-probe/watchdog.yaml file.

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
      ...output omitted...
      spec:
        ...output omitted...
        template:
          metadata:
          ...output omitted...
          spec:
            architecture: amd64
            domain:
              cpu:
                ...output omitted...
              devices:
                disks:
                - bootOrder: 1
                  disk:
                    bus: virtio
                  name: watch1
                interfaces:
                - macAddress: 02:7e:be:00:00:02
                  masquerade: {}
                  name: default
                networkInterfaceMultiqueue: true
                rng: {}
                watchdog:
                  i6300esb:
                    action: poweroff
                  name: testwatchdog
              machine:
                type: pc-q35-rhel8.4.0
      ...output omitted...
    3. Click Actions → Restart to restart the watch1 VMI resource so that Red Hat OpenShift Virtualization recognizes the new device.

    4. Wait a few minutes until the VMI is again in the Running state.
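
      Optionally, confirm from the command line that the restarted VMI carries the watchdog device that you declared:

      [student@workstation ~]$ oc get vmi watch1 \
        -o jsonpath='{.spec.domain.devices.watchdog}'
      ...output omitted...

      The output should include the i6300esb model and the poweroff action.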

  8. Test the watchdog device.

    1. Navigate to the Console tab of the watch1 VM.

    2. Log in as the root user with redhat as the password.

    3. The Linux kernel regularly resets the watchdog hardware timer by writing to the /dev/watchdog device file. However, if another process opens the file in write mode, the kernel stops refreshing the timer itself, because it considers that the new process has taken over this function.

      The watchdog service is an example of a program that writes to the /dev/watchdog device file. Because the RPM package that provides the watchdog service is not available on the watch1 VM, use the echo command to simulate a process that opens the /dev/watchdog device file in write mode. As a result, the kernel stops refreshing the timer.

      [root@watch1 ~]# echo > /dev/watchdog
      [  156.709341] watchdog: watchdog0: watchdog did not stop!

      The echo command triggers the kernel error message because it closes the device file without stopping the watchdog. Because nothing resets the timer anymore, the Intel 6300ESB emulated chipset restarts the VM after 30 seconds.
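
      For comparison, a real watchdog daemon prevents the timer from expiring by writing to the device at intervals shorter than the timeout. The following loop is a minimal sketch of that behavior; treat it as an illustration only, because after you open the device you must keep writing to it, or the watchdog action fires.

      [root@watch1 ~]# exec 3>/dev/watchdog
      [root@watch1 ~]# while true; do echo -n . >&3; sleep 10; done

      Each write resets the 30-second timer. If the loop stops, the timer expires and the watchdog performs the poweroff action.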

Finish

On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish ha-probe
