Use Red Hat OpenShift GitOps for cluster administration.
Configure an OIDC identity provider.
Configure a one-time backup with OADP and restore from it.
Configure OpenShift Logging for short-term log retention and aggregation.
Configure alert forwarding and inspect alerts.
Outcomes
Configure Red Hat Single Sign-On (SSO) as an OIDC identity provider (IdP) for OpenShift by using Red Hat OpenShift GitOps.
Deploy the OpenShift Logging operator and configure it to use Loki as the log store, Vector as the collector, and the OpenShift web UI for log visualization.
Add permission for a user to read the logs from a project.
Back up an application and restore the application to a different namespace.
Configure alert forwarding.
Use monitoring to identify an application problem.
As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.
[student@workstation ~]$ lab start compreview-review1
Specifications
Use GitOps to configure the cluster to use Red Hat SSO for authentication.
Red Hat SSO provides the filipmansur user, with the redhat_sso password, in the ocp_rhsso client.
This user is part of the etherpad_devs group.
Use the following parameters to configure Red Hat SSO in OpenShift:
| Attribute | Value |
|---|---|
| Name | RHSSO_OIDC |
| Client ID | ocp_rhsso |
| Client secret | QGEP6zoLo6BUGbib5oCkGwtZ8EAlmMgW |
To use OpenShift GitOps, you can use the local admin user with the credentials from the openshift-gitops-cluster secret in the openshift-gitops namespace.
The operator adds a link to the default instance in the application menu of the OpenShift console.
The classroom has a GitLab instance that you can use to create any necessary Git repositories.
GitLab is available at the https://git.ocp4.example.com URL.
You can use the developer user with d3v3lop3r as the password.
The lab scripts expect a compreview-review1 repository for cleanup.
The lab scripts configure the username, email, and authentication for Git in the workstation machine.
Argo CD accesses only trusted repositories.
GitLab uses a certificate that is signed by the classroom CA.
This CA is included in the certificates that are trusted by the cluster.
You can use the config.openshift.io/inject-trusted-cabundle label to inject the cluster trusted certificates into a configuration map, and then configure Argo CD to trust the certificate.
The injected certificate is in the ca-bundle.crt file in the configuration map, and Argo CD uses the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path for trusted certificates in the repository server container.
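For reference, a minimal sketch of a configuration map that carries this label follows; the steps later in this exercise create the equivalent object with oc create and oc label commands, so treat this only as an illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-root-ca-bundle
  namespace: openshift-gitops
  labels:
    # OpenShift watches for this label and injects the trusted CA bundle
    # into the ca-bundle.crt key of the data section.
    config.openshift.io/inject-trusted-cabundle: "true"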
The ~/DO380/labs/compreview-review1/sso_config.yml file contains an incomplete authentication configuration.
The lab scripts deploy Etherpad in the compreview-review1 namespace.
The Etherpad resources have the app.kubernetes.io/name label with etherpad as the value, and the supporting database resources have the app.kubernetes.io/name label with mariadb as the value.
Create an etherpad-backup backup schedule.
The ~/DO380/labs/compreview-review1/schedule-db-backup.yml file contains an example to create the schedule.
You can define an alias to access the velero binary by using the following command:
[user@host ~]$ alias velero=\
'oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
Trigger an immediate backup from the schedule, and restore the backup to the etherpad-test-restore namespace.
Configure the deployed OpenShift Logging operator for short-term log retention and aggregation. The Loki operator is deployed in the cluster.
An S3 bucket is available for you, in the lab environment, to configure as log storage for Loki.
The bucket information and credentials are available in the ~/DO380/labs/compreview-review1/s3bucket.env file on the workstation machine.
Create a LokiStack instance with the 1x.demo size.
Use logging-loki as the name for the LokiStack resource.
You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/lokistack.yaml file.
Use Loki as the log store, Vector as the collector, and the OpenShift web UI for log visualization.
You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/clusterlogging.yaml file.
Configure the Logging operator to include the audit logs, by using the ClusterLogForwarder resource.
You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/forwarder.yaml file.
Users in the etherpad_devs group have read and write permissions on the compreview-review1 namespace.
Apply the necessary permissions so users in that group also have access to the application logs for that namespace.
Red Hat SSO provides the filipmansur user from the etherpad_devs group.
You can find an incomplete example for the role binding in the ~/DO380/labs/compreview-review1/logging/group-role.yaml file.
Configure monitoring to send alerts by using a webhook.
The lab scripts deploy a webhook debugger service to the utility machine.
This debugger starts a web server on port 8000 that prints the received payloads in the /home/student/persistent_alerts file in the utility machine.
OpenShift can send webhooks to this debugger at the http://utility.lab.example.com:8000 URL.
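If you want to confirm that the debugger is reachable before configuring Alertmanager, you can send it a test payload. This is only an optional check, and it assumes that the debugger accepts arbitrary JSON POST requests.
[student@workstation ~]$ curl -X POST -H "Content-Type: application/json" \
-d '{"alerts": [{"labels": {"alertname": "PersistentTest"}}]}' \
http://utility.lab.example.com:8000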
This exercise generates alerts with alertname labels that start with the Persistent text.
You can use a regular expression filter so that the debugger prints only alerts for this exercise.
You can reduce the group and repeat intervals of the Alertmanager configuration to receive alerts sooner.
The debugger registers successful receipt of alerts for grading.
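Conceptually, the receiver and route that you create later through the web console correspond to an Alertmanager configuration excerpt similar to the following sketch. The console generates the actual configuration for you, so treat this only as an illustration of the webhook receiver, the regular expression match, and the reduced intervals.
receivers:
- name: persistent
  webhook_configs:
  # The webhook debugger running on the utility machine
  - url: http://utility.lab.example.com:8000
route:
  routes:
  - receiver: persistent
    group_interval: 1m
    repeat_interval: 1m
    match_re:
      # Only forward alerts whose name starts with "Persistent"
      alertname: Persistent.*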
Both the original Etherpad deployment and the restore deployment trigger critical monitoring alerts. Review the two critical alerts that the Etherpad deployments fire, and solve them.
Install the OpenShift GitOps operator from OperatorHub.
Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com
Log in as the admin user with redhatocp as the password.
Navigate to Operators → OperatorHub.
Click Red Hat OpenShift GitOps, and then click Install.
Review the default configuration and click Install. The OLM can take a few minutes to install the operator. Click View Operator to navigate to the operator details.
Configure the default Argo CD instance to trust the classroom certificate to access repositories.
Argo CD accesses only trusted repositories.
You can use the config.openshift.io/inject-trusted-cabundle label to inject the classroom certificate into a configuration map, and then configure Argo CD to trust the certificate.
The injected certificate is in the ca-bundle.crt file in the configuration map, and Argo CD uses the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path for trusted certificates in the repository server container.
Use the terminal to log in to the OpenShift cluster as the admin user with redhatocp as the password.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
...output omitted...
Create a cluster-root-ca-bundle configuration map in the openshift-gitops namespace.
[student@workstation ~]$ oc create configmap -n openshift-gitops \
cluster-root-ca-bundle
Add the config.openshift.io/inject-trusted-cabundle label to the configuration map with the true value.
OpenShift injects the cluster certificates into a configuration map with this label.
This bundle contains the signing certificate for the classroom GitLab instance.
[student@workstation ~]$ oc label configmap -n openshift-gitops \
cluster-root-ca-bundle config.openshift.io/inject-trusted-cabundle=true
configmap/cluster-root-ca-bundle labeled
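Optionally, you can confirm that OpenShift injected the bundle into the configuration map before you edit the Argo CD instance; this check is not part of the graded steps.
[student@workstation ~]$ oc get configmap cluster-root-ca-bundle \
-n openshift-gitops -o jsonpath='{.data.ca-bundle\.crt}' | head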
Edit the Argo CD default instance to inject the certificates.
You can use the following command to edit the resource:
[student@workstation ~]$ oc edit argocd -n openshift-gitops openshift-gitops
Edit the resource to mount the ca-bundle.crt file from the configuration map in the /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem path of the repository server container.
...output omitted...
spec:
...output omitted...
  repo:
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 250m
        memory: 256Mi
    volumeMounts:
    - mountPath: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
      name: cluster-root-ca-bundle
      subPath: ca-bundle.crt
    volumes:
    - configMap:
        name: cluster-root-ca-bundle
      name: cluster-root-ca-bundle
  resourceExclusions: |
...output omitted...
Create a compreview-review1 public repository for the authentication configuration in the classroom GitLab at https://git.ocp4.example.com.
Use the developer GitLab user with d3v3lop3r as the password.
Open a web browser and navigate to https://git.ocp4.example.com.
Log in as the developer user with d3v3lop3r as the password.
Click New project, and then click Create blank project.
Use compreview-review1 as the project slug (repository name), select the Public visibility level, and use the default values for all other fields.
Click Create project.
Populate the repository with the OAuth CR file so OpenShift synchronizes users from the Red Hat SSO OIDC client.
Click Clone, and then copy the https://git.ocp4.example.com/developer/compreview-review1.git URL.
In the terminal window, change to the ~/DO380/labs/compreview-review1/ directory.
[student@workstation ~]$ cd ~/DO380/labs/compreview-review1
In a terminal, run the following command to clone the new repository.
[student@workstation compreview-review1]$ git clone \
https://git.ocp4.example.com/developer/compreview-review1.git
Cloning into 'compreview-review1'...
...output omitted...
Change to the cloned repository directory.
[student@workstation compreview-review1]$ cd compreview-review1
Create the rhsso-oidc-client-secret OpenShift secret for the Red Hat SSO client secret, by using the client secret value from the Red Hat SSO parameters table.
[student@workstation compreview-review1]$ oc create secret generic \
rhsso-oidc-client-secret \
--from-literal clientSecret=QGEP6zoLo6BUGbib5oCkGwtZ8EAlmMgW \
-n openshift-config
secret/rhsso-oidc-client-secret created
Create the OAuth CR YAML file.
You can find an example for the CR in the ~/DO380/labs/compreview-review1/sso_config.yml file.
The YAML file includes an LDAP IdP that you must preserve, because it provides the admin and developer users.
Do not remove the LDAP IdP, and add the OIDC IdP for Red Hat SSO.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true,Validate=false
spec:
  identityProviders:
  - ldap:
...output omitted...
  - openID:
      claims:
        email:
        - email
        name:
        - name
        preferredUsername:
        - preferred_username
        groups:
        - groups
      clientID: ocp_rhsso
      clientSecret:
        name: rhsso-oidc-client-secret
      extraScopes: []
      issuer: >-
        https://sso.ocp4.example.com:8080/auth/realms/internal_devs
    mappingMethod: claim
    name: RHSSO_OIDC
    type: OpenID
Copy the modified sso_config.yml file to the repository.
[student@workstation compreview-review1]$ cp ../sso_config.yml .
Add the sso_config.yml file to the Git index.
[student@workstation compreview-review1]$ git add sso_config.yml
Commit the changes.
[student@workstation compreview-review1]$ git commit -m "Add SSO to OAuth"
[main f3c0ef1] Add SSO to OAuth
...output omitted...
Push the changes to the repository.
The developer user is configured in the lab scripts as the default Git user.
[student@workstation compreview-review1]$ git push
...output omitted...
Change to the ~/DO380/labs/compreview-review1 directory.
[student@workstation compreview-review1]$ cd ..
Log in to the default Argo CD instance as the admin user.
Extract the password for the local admin user.
In the web console, navigate to Workloads → Secrets, search for the openshift-gitops-cluster secret in all namespaces, and then click Reveal values in the details page to view the password.
Alternatively, you can execute the oc extract -n openshift-gitops secret/openshift-gitops-cluster --to=- command in the terminal to view the password.
Open a separate tab and open the default Argo CD instance. You can use the Cluster Argo CD link in the application menu to access this URL, or use the https://openshift-gitops-server-openshift-gitops.apps.ocp4.example.com URL.
The browser displays a warning because Argo CD uses a self-signed certificate. Trust the certificate. Argo CD might take a few minutes before showing the login page.
Log in to Argo CD by using the admin user and the password from the previous step, and use the local admin login instead of the OpenShift login.
Create an Argo CD application with the repository and observe the results.
In the Argo CD browser tab, click + NEW APP.
Create an application with the information in the following table:
| Field | Value |
|---|---|
| Application Name | sso-oauth |
| Project Name | default |
| Repository URL | https://git.ocp4.example.com/developer/compreview-review1.git |
| Path | . |
| Cluster URL | https://kubernetes.default.svc |
Then, click CREATE.
Click the sso-oauth application to view its details.
Click SYNC to display the synchronization panel, and then click SYNCHRONIZE.
Argo CD starts synchronizing the application. After about one minute, the console shows the application as synchronized and healthy.
Change to the terminal window and verify the status for the OAuth pods. Wait for the OAuth pods to be redeployed. It can take a few minutes for OpenShift to redeploy the pods.
[student@workstation compreview-review1]$ watch oc get pods \
-n openshift-authentication
Every 2.0s: oc get pods -n openshift-authentication   workstation: Thu Feb  1 06:11:52 2024

NAME                               READY   STATUS    RESTARTS   AGE
oauth-openshift-69d79b5598-85knt   1/1     Running   0          85s
oauth-openshift-69d79b5598-q2nwk   1/1     Running   0          58s
oauth-openshift-69d79b5598-sj2fj   1/1     Running   0          114s
^C
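Optionally, before testing the login, you can confirm that the synchronized OAuth resource now lists both identity providers; this command prints only the identity provider names.
[student@workstation compreview-review1]$ oc get oauth cluster \
-o jsonpath='{.spec.identityProviders[*].name}{"\n"}'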
Verify that you can log in to the cluster as the filipmansur user with redhat_sso as the password.
[student@workstation compreview-review1]$ oc login -u filipmansur -p redhat_sso
Login successful.
...output omitted...
Log in to the OpenShift cluster as the admin user.
[student@workstation compreview-review1]$ oc login -u admin -p redhatocp
...output omitted...
Create an etherpad-backup backup schedule of Etherpad.
The ~/DO380/labs/compreview-review1/schedule-db-backup.yml file contains an example to create the backup definition.
The backup must include resources that have the app.kubernetes.io/name label with etherpad or mariadb as the value.
Edit the example to match the following text.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: etherpad-backup
  namespace: openshift-adp
spec:
  schedule: "0 7 * * 0"
  paused: true
  template:
    ttl: 360h0m0s
    includedNamespaces:
    - compreview-review1
    orLabelSelectors:
    - matchLabels:
        app.kubernetes.io/name: etherpad
    - matchLabels:
        app.kubernetes.io/name: mariadb
    includedResources:
    - deployment
    - route
    - service
    - pvc
    - persistentvolume
    - secret
    - service
    - namespace
    hooks:
...output omitted...
Apply the configuration for the schedule resource.
[student@workstation compreview-review1]$ oc apply -f schedule-db-backup.yml
schedule.velero.io/etherpad-backup created
Create an alias to access the velero binary from the Velero deployment in the openshift-adp namespace.
[student@workstation compreview-review1]$ alias velero='\
oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
Verify the status of the schedule with the velero command.
[student@workstation compreview-review1]$ velero get schedule
NAME STATUS ... PAUSED
etherpad-backup   New      ...   true
Trigger an immediate backup from the scheduled backup, and restore it to the etherpad-test-restore project.
The backup and restore should complete without errors or warnings.
Use the velero command to start a backup by using the schedule definition from the previous step.
Note the name of the backup that the command creates, to use in the next step.
[student@workstation compreview-review1]$ velero backup create \
--from-schedule etherpad-backup
INFO[0000] No Schedule.template.metadata.labels set - using Schedule.labels for backup object  backup=openshift-adp/etherpad-backup-20240131113134 labels="map[]"
Creating backup from schedule, all other filters are ignored.
Backup request "etherpad-backup-20240131113134" submitted successfully.
Run `velero backup describe etherpad-backup-20240131113134` or `velero backup logs etherpad-backup-20240131113134` for more details.
The S3 object storage that is configured in the lab environment uses a custom certificate that is signed with the OpenShift service CA.
You must add the CA certificate to the velero backup describe --details and velero backup logs commands as follows:
[user@host]$ velero backup logs \
--cacert=/run/secrets/kubernetes.io/serviceaccount/service-ca.crt \
etherpad-backup-20231115113447
Monitor the status of the backup and verify that the backup ends with the Completed status.
The backup process takes several minutes.
[student@workstation compreview-review1]$ velero get backup
Restore the backup to the etherpad-test-restore namespace.
[student@workstation compreview-review1]$ velero restore create etherpad-test \
--from-backup etherpad-backup-20240129143859 \
--namespace-mappings compreview-review1:etherpad-test-restore
Restore request "etherpad-test" submitted successfully.
Run `velero restore describe etherpad-test` or `velero restore logs etherpad-test` for more details.
Use the velero command to get the status of the restore.
Monitor the output to verify that the restore status is Completed.
The restore process takes several minutes.
[student@workstation compreview-review1]$ velero get restore
NAME ... STATUS ...
etherpad-test   ...   Completed   ...
Review the restored resources in the etherpad-test-restore project.
[student@workstation compreview-review1]$ oc get -n etherpad-test-restore \
pod,deployment,route
NAME READY STATUS RESTARTS AGE
pod/etherpad-58f64cfb7d-8qrws 1/1 Running 3 (44s ago) 84s
pod/mariadb-66dc48b5f7-svlst 1/1 Running 0 84s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/etherpad 1/1 1 1 84s
deployment.apps/mariadb 1/1 1 1 84s
NAME HOST/PORT ...
...    etherpad-etherpad-test-restore.apps.ocp4.example.com   ...
The pod for the etherpad deployment requires more time to be ready, because the pod requires the database deployment to be ready first.
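If the etherpad pod is still restarting, you can optionally wait for the deployment to report as available before opening the route; this is only a convenience check.
[student@workstation compreview-review1]$ oc rollout status \
deployment/etherpad -n etherpad-test-restore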
Visit the https://etherpad-etherpad-test-restore.apps.ocp4.example.com URL to verify that the restored Etherpad works.
Use the bucket credentials to create the logging-loki-odf secret in the openshift-logging namespace.
Create a LokiStack resource YAML file for an instance called logging-loki in the openshift-logging namespace.
Create a ClusterLogging resource YAML file by using the loki log store, the vector collector, and the ocp-console visualization type.
Use the ~/DO380/labs/compreview-review1/s3bucket.env environment file to create the logging-loki-odf secret in the openshift-logging namespace.
[student@workstation compreview-review1]$ oc create secret generic \
logging-loki-odf -n openshift-logging --from-env-file=s3bucket.env
secret/logging-loki-odf created
Create a LokiStack resource YAML file for an instance called logging-loki in the openshift-logging namespace.
This instance uses the bucket as the storage.
You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/lokistack.yaml file.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.demo
  storage:
    secret:
      name: logging-loki-odf
      type: s3
    tls:
      caName: openshift-service-ca.crt
  storageClassName: ocs-external-storagecluster-ceph-rbd
  tenants:
    mode: openshift-logging
Create the LokiStack resource.
[student@workstation compreview-review1]$ oc create -f logging/lokistack.yaml
lokistack.loki.grafana.com/logging-loki created
Create a ClusterLogging resource YAML file by using the loki log store, the vector collector, and the ocp-console visualization type.
You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/clusterlogging.yaml file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector
  visualization:
    type: ocp-console
Create the ClusterLogging resource.
[student@workstation compreview-review1]$ oc create -f logging/clusterlogging.yaml
clusterlogging.logging.openshift.io/instance created
Verify that the ClusterLogging and LokiStack pods are up and running.
[student@workstation compreview-review1]$ oc get pods -n openshift-logging
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-554849f7dd-75hjk 1/1 Running 0 6m7s
collector-2dqsh 1/1 Running 0 18s
collector-5fp2q 1/1 Running 0 19s
collector-c6zjv 1/1 Running 0 19s
collector-gggtc 1/1 Running 0 13s
collector-k5zs8 1/1 Running 0 13s
collector-wgz7q 1/1 Running 0 12s
logging-loki-compactor-0 1/1 Running 0 83s
logging-loki-distributor-c55478c4c-9kt2k 1/1 Running 0 83s
logging-loki-gateway-75d6fccb68-krk49 2/2 Running 0 82s
logging-loki-gateway-75d6fccb68-vghdg 2/2 Running 0 82s
logging-loki-index-gateway-0 1/1 Running 0 82s
logging-loki-ingester-0 1/1 Running 0 83s
logging-loki-querier-6f7d8b7564-4glpp 1/1 Running 0 83s
logging-loki-query-frontend-678dddf864-947hn 1/1 Running 0 83s
logging-view-plugin-5b9b5b7bdc-zrp9t 1/1 Running 0 35s
Enable the console plug-in for the OpenShift Logging operator. Verify that you have access to the logs.
Change to the web console browser. Click Operators → Installed Operators, and select the openshift-logging project from the Project drop-down menu.
Click Red Hat OpenShift Logging, click the Console plugin link, select Enable, and click Save.
Reload the web console, and navigate to Observe → Logs.
If the Observe → Logs menu is not available, then wait until the web console shows the Web console update is available message and reload the web console.
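As an alternative to the web console steps, you could enable the plug-in from the command line by adding it to the Console operator configuration. This sketch assumes that the spec.plugins list already exists on the resource; the plug-in name matches the logging-view-plugin pod shown earlier.
[student@workstation compreview-review1]$ oc patch consoles.operator.openshift.io cluster \
--type=json -p '[{"op": "add", "path": "/spec/plugins/-", "value": "logging-view-plugin"}]'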
You have access to logs for the application and infrastructure resources.
By default, the ClusterLogging instance includes logs for the application and infrastructure, but not the audit logs.
Observe the application logs, which are selected by default.
Include the audit logs by creating a log forwarder for the application, infrastructure, and audit logs to the LokiStack resource.
Change to the terminal window, and create a ClusterLogForwarder resource YAML file for a log forwarder called instance in the openshift-logging namespace.
The log forwarder must forward the application, infrastructure, and audit logs.
You can find an incomplete example for the resource in the ~/DO380/labs/compreview-review1/logging/forwarder.yaml file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default
    inputRefs:
    - infrastructure
    - application
    - audit
    outputRefs:
    - default
Create the ClusterLogForwarder resource.
[student@workstation compreview-review1]$ oc create -f logging/forwarder.yaml
clusterlogforwarder.logging.openshift.io/instance created
Change to the web console browser and reload it. You have access to the audit logs.
Give the filipmansur user permission to view the logs in the compreview-review1 project, and verify that the user has access to the logs.
Give the permission to the filipmansur user by assigning the cluster-logging-application-view cluster role through the etherpad-devs group.
Change to the terminal window.
Review the required role to provide access to the application logs to the filipmansur user.
You can find an example in the ~/DO380/labs/compreview-review1/logging/group-role.yaml file.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-application-logs
  namespace: compreview-review1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-application-view
subjects:
- kind: Group
  name: etherpad-devs
  apiGroup: rbac.authorization.k8s.io
Apply the role to the filipmansur user.
[student@workstation compreview-review1]$ oc create -f \
logging/group-role.yaml
rolebinding.rbac.authorization.k8s.io/view-application-logs created
Change to the browser window, open a private window, and navigate to https://console-openshift-console.apps.ocp4.example.com
Click RHSSO_OIDC and log in as the filipmansur user with redhat_sso as the password.
Click Skip tour if the guided tour is displayed.
Navigate to Observe.
Verify that the compreview-review1 project is selected.
Change to the Logs tab.
The filipmansur user has access to the application logs.
Configure alerts.
In the OpenShift web console, change to the non-private window. Navigate to Administration → Cluster Settings, click the Configuration tab, and then click Alertmanager.
In the Alert routing tile, click Edit.
Change the Group interval and Repeat interval fields to 1m.
In the Receivers section, click Create Receiver. Create a receiver with the information in the following table:
| Field | Value |
|---|---|
| Receiver name | persistent |
| Receiver type | Webhook |
| URL | http://utility.lab.example.com:8000 |
Specify a routing label with alertname as the name and Persistent.* as the value, and select the Regular expression checkbox.
Then, click Create.
Review that monitoring shows that the mariadb persistent volume claims are nearly full.
In the OpenShift web console, navigate to Observe → Alerting. The console lists critical alerts for the mariadb persistent volume claims. Besides the original Etherpad deployment, you created a second deployment with a restore. Based on the disk usage in each deployment, the number and type of alerts can vary.
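You can also review the claims from the command line; this optional check lists all claims, including the mariadb claims, in both namespaces.
[student@workstation compreview-review1]$ oc get pvc -n compreview-review1
[student@workstation compreview-review1]$ oc get pvc -n etherpad-test-restore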
Change to the terminal window and connect to the utility machine with SSH as the student user.
[student@workstation compreview-review1]$ ssh utility
...output omitted...
Approximately one minute after the receiver is created, the webhook debugger shows the alerts in the /home/student/persistent_alerts file.
[student@utility ~]$ head persistent_alerts
{'alerts': [{'annotations': {'description': 'PVC mariadb utilization has '
'crossed 75%. Free up some space '
'or expand the PVC.',
'message': 'PVC mariadb is nearing full. Data '
'deletion or PVC expansion is '
'required.',
'severity_level': 'warning',
'storage_type': 'ceph'},
'endsAt': '0001-01-01T00:00:00Z',
'fingerprint': 'd051e03da5866d5b',
...output omitted...
Disconnect from the utility machine.
[student@utility ~]$ exit
logout
Connection to utility closed.
[student@workstation compreview-review1]$
Change to the home directory.
[student@workstation compreview-review1]$ cd
[student@workstation ~]$
Expand the claims.
Navigate to Storage → PersistentVolumeClaims.
Select All Projects from the project drop-down menu.
Locate the two mariadb persistent volume claims.
For each claim, click its name to view the details. Each claim has 190 MiB capacity and about 40 MiB available.
For each claim, select Expand PVC from the actions list. Edit the total size of the claim to 1900 MiB, and then click Expand.
If you navigate to Observe → Alerting, then the alerts disappear after a few minutes.
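If you prefer the command line, you could expand each claim with a patch similar to the following sketch. Replace mariadb-pvc-name with the actual claim name from the previous step, and adjust the namespace for the restored claim; expansion works here because the storage class allows volume expansion.
[student@workstation ~]$ oc patch pvc mariadb-pvc-name -n compreview-review1 \
--type=merge -p '{"spec":{"resources":{"requests":{"storage":"1900Mi"}}}}'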