Create health probes for VMs behind a service to enable Kubernetes to detect an application's availability and to modify service networking accordingly.
Outcomes
Declare readiness and liveness probes for VMs.
Add watchdog devices to VMs.
As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.
[student@workstation ~]$ lab start ha-probe
Instructions
The lab command creates the ha-probe namespace and starts four virtual machines in that namespace:
The www1 and www2 VMs, which host a web application
The watch1 VM, which you use to test a watchdog device
The mariadb-server VM, which hosts a MariaDB database
The lab command creates a service that dispatches the client requests between the two web servers, and also creates a route resource so that the clients can reach the web application by using the http://www-ha-probe.apps.ocp4.example.com URL.
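For reference, such a service selects the virt-launcher pods of the VMs by a label that the VM templates define, in the same way that a service selects regular pods. The following sketch shows what a minimal definition of the front service might look like; the tier: front selector label is an assumption for illustration, because the lab command already creates the real resource for you.

apiVersion: v1
kind: Service
metadata:
  name: front
  namespace: ha-probe
spec:
  selector:
    tier: front        # assumed label on the www1 and www2 VM templates
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80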
As the admin user, confirm that the four VMs are running in the ha-probe project.
From a command-line window, log in to your Red Hat OpenShift cluster as the admin user with redhatocp as the password.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login Successful
...output omitted...
Change to the ha-probe project.
[student@workstation ~]$ oc project ha-probe
Now using project "ha-probe" on server "https://api.ocp4.example.com:6443".
Confirm that the mariadb-server, watch1, www1, and www2 VMs are running.
[student@workstation ~]$ oc get vm
NAME             AGE   STATUS    READY
mariadb-server   13m   Running   True
watch1           15m   Running   True
www1             18m   Running   True
www2             16m   Running   True
Review the service and the route that the lab command deployed.
Confirm that Kubernetes dispatches the requests to the http://www-ha-probe.apps.ocp4.example.com URL between the www1 and www2 VMs.
Use the oc command to list the VMI resources in the ha-probe project.
Note the IP addresses of the www1 and www2 VM instances.
The IP addresses might differ in your environment.
[student@workstation ~]$ oc get vmi
NAME             AGE   PHASE     IP           NODENAME   READY
mariadb-server   12m   Running   10.9.0.26    master02   True
watch1           15m   Running   10.8.2.50    worker02   True
www1             18m   Running   10.8.2.47    worker02   True
www2             16m   Running   10.11.0.40   worker01   True
Confirm that the front service exists.
[student@workstation ~]$ oc get service
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
front   ClusterIP   172.30.49.84   <none>        80/TCP    12m
List the service endpoints. Notice that the service uses the IP addresses of the two VMs for its endpoints.
[student@workstation ~]$ oc get endpoints
NAME    ENDPOINTS                    AGE
front   10.11.0.40:80,10.8.2.47:80   14m
Confirm that the www-front route exists.
Notice that the DNS name for the route is www-ha-probe.apps.ocp4.example.com.
This DNS name points to the front service.
[student@workstation ~]$ oc get route
NAME        HOST/PORT                            PATH   SERVICES   PORT   ...
www-front   www-ha-probe.apps.ocp4.example.com          front      80     ...
Run the curl command several times to confirm that Kubernetes dispatches the requests between the two VMs.
To distinguish between the VMs, the web servers return a web page that includes the hostname.
[student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
Welcome to www2
[student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
Welcome to www2
[student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
Welcome to www1
[student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
Welcome to www1
[student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com
Welcome to www2
The web servers that run inside the www1 and www2 VMs expose the /health endpoint to monitor the application status.
Use the web console to add a readiness probe to the www1 and www2 VMs that uses HTTP GET requests to test the /health endpoint.
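As an optional check before you configure the probes, you can query the /health endpoint through the route with the curl command. The exact response body depends on the web application, so treat the following call only as an illustration.

[student@workstation ~]$ curl http://www-ha-probe.apps.ocp4.example.com/health
...output omitted...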
Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.
Log in as the admin user with redhatocp as the password.
Navigate to Virtualization → VirtualMachines, and then select the ha-probe project.
Select the www1 VM, and then navigate to the YAML tab.
Use the YAML editor to declare the probe as follows, and then click Save.
You can copy and paste the readinessProbe section from the ~/DO316/labs/ha-probe/readiness.yaml file.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
...output omitted...
spec:
...output omitted...
  template:
    metadata:
...output omitted...
    spec:
      architecture: amd64
      domain:
...output omitted...
      readinessProbe:
        httpGet:
          path: /health
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
        timeoutSeconds: 2
        failureThreshold: 2
        successThreshold: 1
      evictionStrategy: LiveMigrate
      hostname: www1
...output omitted...
Click Actions → Restart to restart the www1 VMI resource so that Red Hat OpenShift Virtualization recognizes the new probe.
Wait a few minutes until the VMI is again in the Running state.
Repeat this process to add the readiness probe to the www2 VM.
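Optionally, you can verify from the command line that each VM definition now includes the probe. The following jsonpath query is an illustrative check and is not required by the exercise.

[student@workstation ~]$ oc get vm www1 -o jsonpath='{.spec.template.spec.readinessProbe}{"\n"}'
...output omitted...
[student@workstation ~]$ oc get vm www2 -o jsonpath='{.spec.template.spec.readinessProbe}{"\n"}'
...output omitted...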
Test the probe by stopping the httpd service that runs inside the www2 VM.
Change to the command-line window, and run the ~/DO316/labs/ha-probe/loop.sh script.
This script runs the curl command in an infinite loop.
Leave the command running.
[student@workstation ~]$ ~/DO316/labs/ha-probe/loop.sh
Welcome to www2
Welcome to www2
Welcome to www1
Welcome to www1
Welcome to www2
Welcome to www2
Welcome to www1
...output omitted...
Notice that Kubernetes dispatches the requests between the two VMs.
From the OpenShift web console, navigate to Virtualization → VirtualMachines.
Select the www2 VM, and then navigate to the Console tab.
Log in as the root user with redhat as the password.
Stop the httpd service.
[root@www2 ~]# systemctl stop httpd
Do not close the console.
Switch back to the command-line window on the workstation machine.
After a few seconds, the output of the loop.sh script shows that Kubernetes sends the requests only to the www1 VM.
Press Ctrl+C to stop the script.
...output omitted...
Welcome to www2
Welcome to www2
Welcome to www1
Welcome to www2
Welcome to www1
Welcome to www1
Welcome to www1
Welcome to www1
Welcome to www1
Welcome to www1
^C
Use the oc command to confirm that the front service has only one remaining endpoint.
[student@workstation ~]$ oc get endpoints
NAME    ENDPOINTS      AGE
front   10.8.2.51:80   28m
Switch back to the www2 VM console that is running inside the web browser and restart the httpd service.
Log out of the console when done.
[root@www2 ~]# systemctl start httpd
[root@www2 ~]# logout
Return to the command line on the workstation machine.
Run the ~/DO316/labs/ha-probe/loop.sh script to confirm that Kubernetes dispatches the requests between the two VMs again.
Press Ctrl+C to stop the script.
[student@workstation ~]$ ~/DO316/labs/ha-probe/loop.sh
Welcome to www2
Welcome to www2
Welcome to www1
Welcome to www1
Welcome to www2
Welcome to www2
^C
Use the oc command to confirm that the front service has two endpoints.
[student@workstation ~]$ oc get endpoints
NAME    ENDPOINTS                    AGE
front   10.11.0.45:80,10.8.2.51:80   30m
The MariaDB database that is running inside the mariadb-server VM listens on TCP port 3306.
Add a liveness probe that tests the service by sending requests to the TCP socket.
Change to the OpenShift web console window.
Navigate to Virtualization → VirtualMachines.
Select the mariadb-server VM, and then navigate to the YAML tab.
Use the YAML editor to declare the probe as follows, and then click Save.
You can copy and paste the livenessProbe section from the ~/DO316/labs/ha-probe/liveness.yaml file.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
...output omitted...
spec:
...output omitted...
  template:
    metadata:
...output omitted...
    spec:
      architecture: amd64
      domain:
...output omitted...
      livenessProbe:
        tcpSocket:
          port: 3306
        initialDelaySeconds: 10
        periodSeconds: 5
      evictionStrategy: LiveMigrate
      hostname: mariadb-server
...output omitted...
Click Actions → Restart to restart the mariadb-server VMI resource so that Red Hat OpenShift Virtualization recognizes the new probe.
Wait a few minutes until the VMI is again in the Running state.
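Optionally, confirm from the command line that the running VM instance picked up the probe. This jsonpath query is an illustrative check and is not part of the required steps.

[student@workstation ~]$ oc get vmi mariadb-server -o jsonpath='{.spec.livenessProbe}{"\n"}'
...output omitted...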
Test the probe by stopping the mysql service running inside the mariadb-server VM.
From the OpenShift web console, navigate to the Console tab.
Log in as the root user with redhat as the password.
Stop the mysql service.
[root@mariadb-server ~]# systemctl stop mysql
Because the liveness probe fails, the VM restarts after a few seconds.
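Optionally, switch to the command-line window to watch Kubernetes react to the failed probe. The following commands are an illustrative check; the exact event messages depend on your environment.

[student@workstation ~]$ oc get events --field-selector involvedObject.name=mariadb-server
...output omitted...
[student@workstation ~]$ oc get vmi mariadb-server
...output omitted...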
Add a watchdog device to the watch1 VM.
Navigate to Virtualization → VirtualMachines.
Select the watch1 VM, and then navigate to the YAML tab.
Use the YAML editor to declare the watchdog device as follows, and then click Save.
You can copy and paste the watchdog section from the ~/DO316/labs/ha-probe/watchdog.yaml file.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
...output omitted...
spec:
...output omitted...
  template:
    metadata:
...output omitted...
    spec:
      architecture: amd64
      domain:
        cpu:
...output omitted...
        devices:
          disks:
          - bootOrder: 1
            disk:
              bus: virtio
            name: watch1
          interfaces:
          - macAddress: 02:7e:be:00:00:02
            masquerade: {}
            name: default
            networkInterfaceMultiqueue: true
          rng: {}
          watchdog:
            i6300esb:
              action: poweroff
            name: testwatchdog
        machine:
          type: pc-q35-rhel8.4.0
...output omitted...
Click Actions → Restart to restart the watch1 VMI resource so that Red Hat OpenShift Virtualization recognizes the new watchdog device.
Wait a few minutes until the VMI is again in the Running state.
Test the watchdog device.
Navigate to the Console tab.
Log in as the root user with redhat as the password.
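Optionally, before you trigger the watchdog, you can confirm that the guest detected the emulated device by listing the PCI devices or the kernel messages. The exact output depends on the guest image, so treat these commands as an illustration.

[root@watch1 ~]# lspci | grep -i 6300
...output omitted...
[root@watch1 ~]# dmesg | grep -i i6300
...output omitted...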
The Linux kernel regularly resets the watchdog hardware timer by writing to the /dev/watchdog device file.
However, if the kernel detects that another process accesses the file in write mode, then it stops refreshing the timer.
The kernel no longer considers it necessary to manage the timer because the new process took over this function.
The watchdog service is an example of a program that writes to the /dev/watchdog device file.
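To illustrate what such a program does, a minimal watchdog-petting loop is sketched below: each write resets the hardware timer, so the VM keeps running for as long as the loop is alive. This sketch assumes a generic Linux guest and is shown for illustration only; do not run it during this exercise.

#!/bin/bash
# Minimal watchdog-petting loop: each write to /dev/watchdog resets the
# emulated hardware timer before it expires.
while true; do
    echo 1 > /dev/watchdog
    sleep 10
done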
Because the RPM package that provides the watchdog service is not available on the watch1 VM, use the echo command to simulate a process that opens the /dev/watchdog device file in write mode.
The kernel stops refreshing the timer.
[root@watch1 ~]# echo > /dev/watchdog
[  156.709341] watchdog: watchdog0: watchdog did not stop!
The preceding echo command returns the kernel error message about the watchdog not being reset.
After 30 seconds, the Intel 6300ESB emulated chipset restarts the VM.
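Optionally, from the command-line window on the workstation machine, you can confirm that the watch1 VMI restarted by checking its status and age. The output shown here is omitted because it depends on your environment.

[student@workstation ~]$ oc get vmi watch1
...output omitted...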