
Lab: Cluster Administration

Use Red Hat OpenShift GitOps for cluster administration.

Configure an OIDC identity provider.

Configure a one-time backup with OADP and restore from it.

Configure OpenShift Logging for short-term log retention and aggregation.

Configure alert forwarding and inspect alerts.

Outcomes

  • Configure Red Hat Single Sign-On (SSO) as an OIDC identity provider (IdP) for OpenShift by using Red Hat OpenShift GitOps.

  • Deploy the OpenShift Logging operator and configure it to use Loki as the log store, Vector as the collector, and the OpenShift web UI for log visualization.

  • Add permission for a user to read the logs from a project.

  • Back up an application and restore the application to a different namespace.

  • Configure alert forwarding.

  • Use monitoring to identify an application problem.

As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.

[student@workstation ~]$ lab start compreview-review1

Specifications

  • Use GitOps to configure the cluster to use Red Hat SSO for authentication. Red Hat SSO provides the filipmansur user, with the redhat_sso password, in the ocp_rhsso client. This user is part of the etherpad_devs group.

    Use the following parameters to configure Red Hat SSO in OpenShift:

    Attribute       Value
    Name            RHSSO_OIDC
    Client ID       ocp_rhsso
    Client secret   QGEP6zoLo6BUGbib5oCkGwtZ8EAlmMgW
    • To use OpenShift GitOps, you can use the local admin user with the credentials from the openshift-gitops-cluster secret in the openshift-gitops namespace. The operator adds a link to the default instance in the application menu of the OpenShift console.

    • The classroom has a GitLab instance that you can use to create any necessary Git repositories. GitLab is available at the https://git.ocp4.example.com URL. You can use the developer user with d3v3lop3r as the password. The lab scripts expect a compreview-review1 repository for cleanup.

      The lab scripts configure the username, email, and authentication for Git in the workstation machine.

    • Argo CD accesses only trusted repositories. GitLab uses a certificate that is signed by the classroom CA. This CA is included in the certificates that are trusted by the cluster. You can use the config.openshift.io/inject-trusted-cabundle label to inject the cluster trusted certificates into a configuration map, and then configure Argo CD to trust the certificate. The injected certificate is in the ca-bundle.crt file in the configuration map, and Argo CD uses the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path for trusted certificates in the repository server container.

    • The ~/DO380/labs/compreview-review1/sso_config.yml file contains an incomplete authentication configuration.

  • The lab scripts deploy Etherpad in the compreview-review1 namespace. The Etherpad resources have the app.kubernetes.io/name label with etherpad as the value, and the supporting database resources have the app.kubernetes.io/name label with mariadb as the value.

    • Create an etherpad-backup backup schedule. The ~/DO380/labs/compreview-review1/schedule-db-backup.yml file contains an example to create the schedule.

    • You can define an alias to access the velero binary by using the following command:

      [user@host ~]$ alias velero=\
        'oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
    • Trigger an immediate backup from the schedule, and restore the backup to the etherpad-test-restore namespace.

  • Configure the deployed OpenShift Logging operator for short-term log retention and aggregation. The Loki operator is deployed in the cluster.

    • An S3 bucket is available in the lab environment for you to configure as log storage for Loki. The bucket information and credentials are available in the ~/DO380/labs/compreview-review1/s3bucket.env file on the workstation machine.

    • Create a LokiStack instance with the 1x.demo size. Use logging-loki as the name for the LokiStack resource. You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/lokistack.yaml file.

    • Use Loki as the log store, Vector as the collector, and the OpenShift web UI for log visualization. You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/clusterlogging.yaml file.

    • Configure the Logging operator to include the audit logs, by using the ClusterLogForwarder resource. You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/forwarder.yaml file.

    • Users in the etherpad_devs group have read and write permissions on the compreview-review1 namespace. Apply the necessary permissions so users in that group also have access to the application logs for that namespace. Red Hat SSO provides the filipmansur user from the etherpad_devs group. You can find an incomplete example for the role binding in the ~/DO380/labs/compreview-review1/logging/group-role.yaml file.

  • Configure monitoring to send alerts by using a webhook.

    • The lab scripts deploy a webhook debugger service to the utility machine. This debugger starts a web server on port 8000 that prints the received payloads to the /home/student/persistent_alerts file on the utility machine.

    • OpenShift can send webhooks to this debugger at the http://utility.lab.example.com:8000 URL.

    • This exercise generates alerts with alertname labels that start with the Persistent text. You can use a regular expression filter so that the debugger prints only alerts for this exercise.

    • You can reduce the group and repeat intervals of the Alertmanager configuration to receive alerts sooner.

    • The debugger registers successful receipt of alerts for grading.

  • Both the original Etherpad deployment and the restore deployment trigger critical monitoring alerts. Review the two critical alerts that the Etherpad deployments fire, and solve them.

  1. Install the OpenShift GitOps operator from OperatorHub.

    1. Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com

    2. Click Red Hat Identity Management and log in as the admin user with redhatocp as the password.

    3. Navigate to Operators → OperatorHub.

    4. Click Red Hat OpenShift GitOps, and then click Install.

    5. Review the default configuration and click Install. The OLM can take a few minutes to install the operator. Click View Operator to navigate to the operator details.
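
      If you prefer the CLI, a Subscription manifest is an alternative to the OperatorHub workflow. The following is only a sketch and is not part of the graded steps; the channel value is an assumption, so confirm it in OperatorHub before applying.

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-gitops-operator
        namespace: openshift-operators
      spec:
        channel: latest                  # assumed update channel; verify in OperatorHub
        name: openshift-gitops-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace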

  2. Configure the default Argo CD instance to trust the classroom certificate to access repositories. Argo CD accesses only trusted repositories. You can use the config.openshift.io/inject-trusted-cabundle label to inject the classroom certificate into a configuration map, and then configure Argo CD to trust the certificate. The injected certificate is in the ca-bundle.crt file in the configuration map, and Argo CD uses the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path for trusted certificates in the repository server container.

    1. Use the terminal to log in to the OpenShift cluster as the admin user with redhatocp as the password.

      [student@workstation ~]$ oc login -u admin -p redhatocp \
        https://api.ocp4.example.com:6443
      ...output omitted...
    2. Create a cluster-root-ca-bundle configuration map in the openshift-gitops namespace.

      [student@workstation ~]$ oc create configmap -n openshift-gitops \
        cluster-root-ca-bundle
    3. Add the config.openshift.io/inject-trusted-cabundle label to the configuration map with the true value. OpenShift injects the cluster certificates into a configuration map with this label. This bundle contains the signing certificate for the classroom GitLab instance.

      [student@workstation ~]$ oc label configmap -n openshift-gitops \
        cluster-root-ca-bundle config.openshift.io/inject-trusted-cabundle=true
      configmap/cluster-root-ca-bundle labeled
    4. Edit the Argo CD default instance to inject the certificates.

      You can use the following command to edit the resource:

      [student@workstation ~]$ oc edit argocd -n openshift-gitops openshift-gitops

      Edit the resource to mount the ca-bundle.crt file from the configuration map in the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path of the repository server container.

      ...output omitted...
      spec:
      ...output omitted...
        repo:
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
            requests:
              cpu: 250m
              memory: 256Mi
          volumeMounts:
          - mountPath: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
            name: cluster-root-ca-bundle
            subPath: ca-bundle.crt
          volumes:
          - configMap:
              name: cluster-root-ca-bundle
            name: cluster-root-ca-bundle
        resourceExclusions: |
      ...output omitted...
  3. Create a compreview-review1 public repository for the authentication configuration in the classroom GitLab at https://git.ocp4.example.com. Use the developer GitLab user with d3v3lop3r as the password.

    1. Open a web browser and navigate to https://git.ocp4.example.com. Log in as the developer user with d3v3lop3r as the password.

    2. Click New project, and then click Create blank project. Use compreview-review1 as the project slug (repository name), select the Public visibility level, and use the default values for all other fields. Click Create project.
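
      Alternatively, you can create the project through the GitLab REST API. This is only a sketch; it assumes that you first create a personal access token for the developer user, and the token value shown is a placeholder.

      [user@host ~]$ curl --request POST \
        --header "PRIVATE-TOKEN: <personal-access-token>" \
        "https://git.ocp4.example.com/api/v4/projects?name=compreview-review1&visibility=public"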

  4. Populate the repository with the OAuth CR file so OpenShift synchronizes users from the Red Hat SSO OIDC client.

    1. Click Clone, and then copy the https://git.ocp4.example.com/developer/compreview-review1.git URL.

    2. In the terminal window, change to the ~/DO380/labs/compreview-review1/ directory.

      [student@workstation ~]$ cd ~/DO380/labs/compreview-review1
    3. In a terminal, run the following command to clone the new repository.

      [student@workstation compreview-review1]$ git clone \
        https://git.ocp4.example.com/developer/compreview-review1.git
      Cloning into 'compreview-review1'...
      ...output omitted...
    4. Change to the cloned repository directory.

      [student@workstation compreview-review1]$ cd compreview-review1
    5. Create the rhsso-oidc-client-secret OpenShift secret for the Red Hat SSO client secret, by using the client secret value from the parameters table.

      [student@workstation compreview-review1]$ oc create secret generic \
        rhsso-oidc-client-secret \
        --from-literal clientSecret=QGEP6zoLo6BUGbib5oCkGwtZ8EAlmMgW \
        -n openshift-config
      secret/rhsso-oidc-client-secret created
    6. Create the OAuth CR YAML file. You can find an example for the CR in the ~/DO380/labs/compreview-review1/sso_config.yml file. The YAML file includes an LDAP IdP that you must preserve, because it provides the admin and developer users. Do not remove the LDAP IdP, and add the OIDC IdP for Red Hat SSO.

      apiVersion: config.openshift.io/v1
      kind: OAuth
      metadata:
        name: cluster
        annotations:
          argocd.argoproj.io/sync-options: ServerSideApply=true,Validate=false
      spec:
        identityProviders:
        - ldap:
      ...output omitted...
        - openID:
            claims:
              email:
                - email
              name:
                - name
              preferredUsername:
                - preferred_username
              groups:
                - groups
            clientID: ocp_rhsso
            clientSecret:
              name: rhsso-oidc-client-secret
            extraScopes: []
            issuer: >-
              https://sso.ocp4.example.com:8080/auth/realms/internal_devs
          mappingMethod: claim
          name: RHSSO_OIDC
          type: OpenID
    7. Copy the modified sso_config.yml file to the repository.

      [student@workstation compreview-review1]$ cp ../sso_config.yml .
    8. Add the sso_config.yml file to the Git index.

      [student@workstation compreview-review1]$ git add sso_config.yml
    9. Commit the changes.

      [student@workstation compreview-review1]$ git commit -m "Add SSO to OAuth"
      [main f3c0ef1] Add SSO to OAuth
      ...output omitted...
    10. Push the changes to the repository. The developer user is configured in the lab scripts as the default Git user.

      [student@workstation compreview-review1]$ git push
      ...output omitted...
    11. Change to the ~/DO380/labs/compreview-review1 directory.

      [student@workstation compreview-review1]$ cd ..
  5. Log in to the default Argo CD instance with the local admin user.

    1. Extract the password for the local admin user.

      In the web console, navigate to Workloads → Secrets, search for the openshift-gitops-cluster secret in all namespaces, and then click Reveal values on the details page to view the password. Alternatively, you can execute the oc extract -n openshift-gitops secret/openshift-gitops-cluster --to=- command in the terminal to view the password.
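
      For example, from the terminal:

      [student@workstation ~]$ oc extract -n openshift-gitops \
        secret/openshift-gitops-cluster --to=-
      ...output omitted...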

    2. Open a separate browser tab and open the default Argo CD instance. You can access this URL by clicking Cluster Argo CD in the application menu of the OpenShift console, or use the https://openshift-gitops-server-openshift-gitops.apps.ocp4.example.com URL.

      The browser displays a warning because Argo CD uses a self-signed certificate. Trust the certificate. Argo CD might take a few minutes before showing the login page.

    3. Log in to Argo CD by using the admin user and the password from the previous step, and click SIGN IN instead of LOG IN VIA OPENSHIFT.

  6. Create an Argo CD application with the repository and observe the results.

    1. In the Argo CD browser tab, click CREATE APPLICATION.

    2. Create an application with the information in the following table:

      Field              Value
      Application Name   sso-oauth
      Project Name       default
      Repository URL     https://git.ocp4.example.com/developer/compreview-review1.git
      Path               .
      Cluster URL        https://kubernetes.default.svc

      Then, click CREATE.
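
      If you prefer a declarative approach, the form corresponds roughly to the following Application resource sketch. The targetRevision value is an assumption; the form defaults to HEAD.

      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: sso-oauth
        namespace: openshift-gitops
      spec:
        project: default
        source:
          repoURL: https://git.ocp4.example.com/developer/compreview-review1.git
          path: .
          targetRevision: HEAD    # assumed default revision
        destination:
          server: https://kubernetes.default.svc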

    3. Click sso-oauth to view the application.

    4. Click SYNC to display the synchronization panel, and then click SYNCHRONIZE.

      Argo CD starts synchronizing the application. After about one minute, the console shows the application as synchronized and healthy.

    5. Change to the terminal window and verify the status for the OAuth pods. Wait for the OAuth pods to be redeployed. It can take a few minutes for OpenShift to redeploy the pods.

      [student@workstation compreview-review1]$ watch oc get pods \
        -n openshift-authentication
      Every 2.0s: oc get pods -n openshift-authentication  workstation: Thu Feb  1 06:11:52 2024
      
      NAME                               READY   STATUS    RESTARTS   AGE
      oauth-openshift-69d79b5598-85knt   1/1     Running   0          85s
      oauth-openshift-69d79b5598-q2nwk   1/1     Running   0          58s
      oauth-openshift-69d79b5598-sj2fj   1/1     Running   0          114s
      ^C
    6. Verify that you can log in to the cluster as the filipmansur user with redhat_sso as the password.

      [student@workstation compreview-review1]$ oc login -u filipmansur -p redhat_sso
      Login successful.
      ...output omitted...
    7. Log in to the OpenShift cluster as the admin user.

      [student@workstation compreview-review1]$ oc login -u admin -p redhatocp
      ...output omitted...
  7. Create an etherpad-backup backup schedule of Etherpad. The ~/DO380/labs/compreview-review1/schedule-db-backup.yml file contains an example to create the backup definition. The backup must include resources with the app.kubernetes.io/name label with etherpad as the value, and resources with the app.kubernetes.io/name label with mariadb as the value.

    1. Edit the example to match the following text.

      apiVersion: velero.io/v1
      kind: Schedule
      metadata:
        name: etherpad-backup
        namespace: openshift-adp
      spec:
        schedule: "0 7 * * 0"
        paused: true
        template:
          ttl: 360h0m0s
          includedNamespaces:
          - compreview-review1
          orLabelSelectors:
          - matchLabels:
              app.kubernetes.io/name: etherpad
          - matchLabels:
              app.kubernetes.io/name: mariadb
          includedResources:
          - deployment
          - route
          - service
          - pvc
          - persistentvolume
          - secret
          - namespace
          hooks:
      ...output omitted...
    2. Apply the configuration for the schedule resource.

      [student@workstation compreview-review1]$ oc apply -f schedule-db-backup.yml
      schedule.velero.io/etherpad-backup created
    3. Create an alias to access the velero binary from the Velero deployment in the openshift-adp namespace.

      [student@workstation compreview-review1]$ alias velero=\
        'oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
    4. Verify the status of the schedule with the velero command.

      [student@workstation compreview-review1]$ velero get schedule
      NAME              STATUS   ...   PAUSED
      etherpad-backup   New      ...   true
  8. Trigger an immediate backup from the scheduled backup, and restore it to the etherpad-test-restore project.

    The backup and restore should complete without errors or warnings.

    1. Use the velero command to start a backup by using the schedule definition from the previous step. Note the name of the backup that the command creates, to use in the restore step.

      [student@workstation compreview-review1]$ velero backup create \
        --from-schedule etherpad-backup
      INFO[0000] No Schedule.template.metadata.labels set - using Schedule.labels for backup object  backup=openshift-adp/etherpad-backup-20240131113134 labels="map[]"
      Creating backup from schedule, all other filters are ignored.
      Backup request "etherpad-backup-20240131113134" submitted successfully.
      Run velero backup describe etherpad-backup-20240131113134 or velero backup logs etherpad-backup-20240131113134 for more details.

      Note

      The S3 object storage that is configured in the lab environment uses a custom certificate that is signed with the OpenShift service CA. You must add the CA certificate to the velero backup describe --details and velero backup logs commands as follows:

      [user@host]$ velero backup logs \
        --cacert=/run/secrets/kubernetes.io/serviceaccount/service-ca.crt \
        etherpad-backup-20231115113447
    2. Monitor the status of the backup and verify that the backup ends with the Completed status. The backup process takes several minutes.

      [student@workstation compreview-review1]$ velero get backup
    3. Restore the backup to the etherpad-test-restore namespace.

      [student@workstation compreview-review1]$ velero restore create etherpad-test \
        --from-backup etherpad-backup-20240129143859 \
        --namespace-mappings compreview-review1:etherpad-test-restore
      Restore request "etherpad-test" submitted successfully.
      Run velero restore describe etherpad-test or velero restore logs etherpad-test for more details.
    4. Use the velero command to get the status of the restore. Monitor the output to verify that the restore status is Completed. The restore process takes several minutes.

      [student@workstation compreview-review1]$ velero get restore
      NAME            ...   STATUS      ...
      etherpad-test   ...   Completed   ...
    5. Review the restored resources in the etherpad-test-restore project.

      [student@workstation compreview-review1]$ oc get -n etherpad-test-restore \
        pod,deployment,route
      NAME                            READY   STATUS    RESTARTS      AGE
      pod/etherpad-58f64cfb7d-8qrws   1/1     Running   3 (44s ago)   84s
      pod/mariadb-66dc48b5f7-svlst    1/1     Running   0             84s
      
      NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/etherpad   1/1     1            1           84s
      deployment.apps/mariadb    1/1     1            1           84s
      
      NAME   HOST/PORT                                              ...
      ...    etherpad-etherpad-test-restore.apps.ocp4.example.com   ...

      The pod for the etherpad deployment requires more time to be ready, because the pod requires the database deployment to be ready first.

    6. Visit the https://etherpad-etherpad-test-restore.apps.ocp4.example.com URL to verify that the restored Etherpad works.
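
      You can also check the route from the terminal. This assumes that the classroom CA is trusted on the workstation machine; otherwise, add the -k option. A 200 response code indicates that the restored application answers requests.

      [user@host ~]$ curl -s -o /dev/null -w '%{http_code}\n' \
        https://etherpad-etherpad-test-restore.apps.ocp4.example.com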

  9. Use the bucket credentials to create the logging-loki-odf secret in the openshift-logging namespace. Create a LokiStack resource YAML file for an instance called logging-loki in the openshift-logging namespace. Create a ClusterLogging resource YAML file by using the loki log store, the vector collector, and the ocp-console visualization type.

    1. Use the ~/DO380/labs/compreview-review1/s3bucket.env environment file to create the logging-loki-odf secret in the openshift-logging namespace.

      [student@workstation compreview-review1]$ oc create secret generic \
        logging-loki-odf -n openshift-logging --from-env-file=s3bucket.env
      secret/logging-loki-odf created
    2. Create a LokiStack resource YAML file for an instance called logging-loki in the openshift-logging namespace. This instance uses the bucket as the storage. You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/lokistack.yaml file.

      apiVersion: loki.grafana.com/v1
      kind: LokiStack
      metadata:
        name: logging-loki
        namespace: openshift-logging
      spec:
        size: 1x.demo
        storage:
          secret:
            name: logging-loki-odf
            type: s3
          tls:
            caName: openshift-service-ca.crt
        storageClassName: ocs-external-storagecluster-ceph-rbd
        tenants:
          mode: openshift-logging
    3. Create the LokiStack resource.

      [student@workstation compreview-review1]$ oc create -f logging/lokistack.yaml
      lokistack.loki.grafana.com/logging-loki created
    4. Create a ClusterLogging resource YAML file by using the loki log store, the vector collector, and the ocp-console visualization type. You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/clusterlogging.yaml file.

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogging
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        managementState: Managed
        logStore:
          type: lokistack
          lokistack:
            name: logging-loki
        collection:
          type: vector
        visualization:
          type: ocp-console
    5. Create the ClusterLogging resource.

      [student@workstation compreview-review1]$ oc create -f logging/clusterlogging.yaml
      clusterlogging.logging.openshift.io/instance created
    6. Verify that the ClusterLogging and LokiStack pods are up and running.

      [student@workstation compreview-review1]$ oc get pods -n openshift-logging
      NAME                                           READY   STATUS    RESTARTS   AGE
      cluster-logging-operator-554849f7dd-75hjk      1/1     Running   0          6m7s
      collector-2dqsh                                1/1     Running   0          18s
      collector-5fp2q                                1/1     Running   0          19s
      collector-c6zjv                                1/1     Running   0          19s
      collector-gggtc                                1/1     Running   0          13s
      collector-k5zs8                                1/1     Running   0          13s
      collector-wgz7q                                1/1     Running   0          12s
      logging-loki-compactor-0                       1/1     Running   0          83s
      logging-loki-distributor-c55478c4c-9kt2k       1/1     Running   0          83s
      logging-loki-gateway-75d6fccb68-krk49          2/2     Running   0          82s
      logging-loki-gateway-75d6fccb68-vghdg          2/2     Running   0          82s
      logging-loki-index-gateway-0                   1/1     Running   0          82s
      logging-loki-ingester-0                        1/1     Running   0          83s
      logging-loki-querier-6f7d8b7564-4glpp          1/1     Running   0          83s
      logging-loki-query-frontend-678dddf864-947hn   1/1     Running   0          83s
      logging-view-plugin-5b9b5b7bdc-zrp9t           1/1     Running   0          35s
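
      Optionally, verify that the LokiStack instance exists and is ready before continuing. This check is not required by the exercise.

      [user@host ~]$ oc get lokistack logging-loki -n openshift-logging
      ...output omitted...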
  10. Enable the console plug-in for the OpenShift Logging operator. Verify that you have access to the logs.

    1. Change to the web console browser. Click Operators → Installed Operators, and select All Projects from the drop-down menu.

    2. Click Red Hat OpenShift Logging, click Console plugin, select Enable, and click Save.

    3. Reload the web console, and navigate to Observe → Logs. If the Observe → Logs menu is not available, then wait until the web console shows the Web console update is available message and reload the web console. You have access to logs for the application and infrastructure resources. By default, the ClusterLogging instance includes logs for the application and infrastructure, but not the audit logs. Observe the application logs, which are selected by default.
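
      A possible CLI alternative for enabling the plug-in is to patch the console operator configuration. The plug-in name comes from the logging-view-plugin pod in the previous step; this sketch assumes that no other console plug-ins are enabled, because the merge patch replaces the list.

      [user@host ~]$ oc patch consoles.operator.openshift.io cluster \
        --type merge --patch '{"spec":{"plugins":["logging-view-plugin"]}}'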

  11. Include the audit logs by creating a log forwarder that sends the application, infrastructure, and audit logs to the LokiStack resource.

    1. Change to the terminal window, and create a ClusterLogForwarder resource YAML file for a log forwarder called instance in the openshift-logging namespace. The log forwarder must forward the application, infrastructure, and audit logs. You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/forwarder.yaml file.

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        pipelines:
        - name: all-to-default
          inputRefs:
          - infrastructure
          - application
          - audit
          outputRefs:
          - default
    2. Create the ClusterLogForwarder resource.

      [student@workstation compreview-review1]$ oc create -f logging/forwarder.yaml
      clusterlogforwarder.logging.openshift.io/instance created
    3. Change to the web console browser and reload it. You have access to the audit logs.
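
      You can also confirm from the terminal that the forwarder resource exists:

      [user@host ~]$ oc get clusterlogforwarder instance -n openshift-logging
      ...output omitted...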

  12. Give the filipmansur user permission to view the logs in the compreview-review1 project, and verify that the user has access to the logs. Give the permission to the filipmansur user by assigning the cluster-logging-application-view cluster role through the etherpad_devs group.

    1. Change to the terminal window.

    2. Review the required role binding, which provides users in the etherpad_devs group, including the filipmansur user, with access to the application logs. You can find an example in the ~/DO380/labs/compreview-review1/logging/group-role.yaml file.

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: view-application-logs
        namespace: compreview-review1
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-logging-application-view
      subjects:
      - kind: Group
        name: etherpad_devs
        apiGroup: rbac.authorization.k8s.io
    3. Create the role binding to grant the permission.

      [student@workstation compreview-review1]$ oc create -f \
        logging/group-role.yaml
      rolebinding.rbac.authorization.k8s.io/view-application-logs created
    4. Change to the browser window, open a private window, and navigate to https://console-openshift-console.apps.ocp4.example.com

    5. Click RHSSO_OIDC and log in as the filipmansur user with redhat_sso as the password. Click Skip tour.

    6. Navigate to Observe. Verify that the compreview-review1 project is selected. Change to the Logs tab. The filipmansur user has access to the application logs.
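
      From the terminal, you can also review the role binding that grants the access:

      [user@host ~]$ oc describe rolebinding view-application-logs \
        -n compreview-review1
      ...output omitted...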

  13. Configure alerts.

    1. In the OpenShift web console, change to the non-private window. Navigate to Administration → Cluster Settings, click the Configuration tab, and then click Alertmanager.

    2. In the Alert routing tile, click Edit. Change the Group interval and Repeat interval fields to 1m.

    3. In the Receivers tile, click Create Receiver. Create a receiver with the information in the following table:

      Field           Value
      Receiver name   persistent
      Receiver type   Webhook
      URL             http://utility.lab.example.com:8000

      Specify a routing label with the alertname name and the Persistent.* value, and select the Regular expression checkbox.

      Then, click Create.
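
      The web console stores this configuration in the alertmanager-main secret in the openshift-monitoring namespace. A minimal sketch of the resulting alertmanager.yaml, assuming that no other receivers or routes are configured, might look like the following; the Default receiver name is an assumption.

      route:
        receiver: Default              # assumed existing default receiver
        group_interval: 1m
        repeat_interval: 1m
        routes:
        - receiver: persistent
          match_re:
            alertname: Persistent.*
      receivers:
      - name: Default
      - name: persistent
        webhook_configs:
        - url: http://utility.lab.example.com:8000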

  14. Verify in monitoring that the mariadb persistent volume claims are nearly full.

    1. In the OpenShift web console, navigate to Observe → Alerting. The console should list the PersistentVolumeUsageCritical and PersistentVolumeUsageNearFull alerts. Besides the original Etherpad deployment, you created a second deployment with a restore. Based on the disk usage in each deployment, the number and type of alerts can vary.

    2. Change to the terminal window and connect to the utility machine with SSH as the student user.

      [student@workstation compreview-review1]$ ssh utility
      ...output omitted...
    3. Approximately one minute after the receiver is created, the webhook debugger shows the alerts in the /home/student/persistent_alerts file.

      [student@utility ~]$ head persistent_alerts
      {'alerts': [{'annotations': {'description': 'PVC mariadb utilization has '
                                                  'crossed 75%. Free up some space '
                                                  'or expand the PVC.',
                                   'message': 'PVC mariadb is nearing full. Data '
                                              'deletion or PVC expansion is '
                                              'required.',
                                   'severity_level': 'warning',
                                   'storage_type': 'ceph'},
                   'endsAt': '0001-01-01T00:00:00Z',
                   'fingerprint': 'd051e03da5866d5b',
      ...output omitted...
    4. Disconnect from the utility machine.

      [student@utility ~]$ exit
      logout
      Connection to utility closed.
      [student@workstation compreview-review1]$
    5. Change to the home directory.

      [student@workstation compreview-review1]$ cd
      [student@workstation ~]$
  15. Expand the claims.

    1. Navigate to Storage → PersistentVolumeClaims. Select All Projects from the project drop-down menu. Locate the two mariadb persistent volume claims.

    2. For each claim, click its name to view the details. Each claim has 190 MiB capacity and about 40 MiB available.

      For each claim, select Expand PVC from the Actions list. Edit the total size of the claim to 1900 MiB, and then click Expand.

    3. If you navigate to Observe → Alerting, then the alerts disappear after a few minutes.
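
    A CLI equivalent for the expansion, assuming that the claims are named mariadb in both namespaces, is to patch the storage request of each claim:

      [user@host ~]$ oc patch pvc mariadb -n compreview-review1 \
        --patch '{"spec":{"resources":{"requests":{"storage":"1900Mi"}}}}'
      [user@host ~]$ oc patch pvc mariadb -n etherpad-test-restore \
        --patch '{"spec":{"resources":{"requests":{"storage":"1900Mi"}}}}'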

Evaluation

As the student user on the workstation machine, use the lab command to grade your work. Correct any reported failures and rerun the command until successful.

[student@workstation ~]$ lab grade compreview-review1

Finish

As the student user on the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish compreview-review1