Deploy OpenShift Logging for short-term log retention and aggregation.
Outcomes
Use Loki as the log store for OpenShift Logging.
Use Vector as the collector and the OpenShift web UI for log visualization.
As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.
[student@workstation ~]$ lab start logging-central
Instructions
Your company requires you to configure the deployed OpenShift Logging operator for short-term log retention and aggregation.
You must configure OpenShift Logging by using Vector, Loki, and the OpenShift web console.
Moreover, you must apply the cluster-logging-application-view cluster role to the developer user, so this user can retrieve application logs for the testing-logs project.
As the OpenShift admin user, install the Loki operator in the openshift-operators-redhat namespace.
Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.
Log in as the admin user with redhatocp as the password.
Navigate to Operators → OperatorHub.
Search for and click the Loki Operator tile, and then click Install.
Select the openshift-operators-redhat namespace and click Install. The Operator Lifecycle Manager (OLM) can take a few minutes to install the operator. Click View Operator to navigate to the operator details.
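If you prefer the command line, an equivalent installation creates the namespace, an OperatorGroup, and a Subscription. The following is a minimal sketch only; the channel and catalog source names are assumptions that you should confirm in OperatorHub before applying it.
[student@workstation ~]$ oc apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}                       # empty spec targets all namespaces
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable              # assumption: verify the available channel in OperatorHub
  name: loki-operator
  source: redhat-operators     # assumption: verify the catalog source name in your cluster
  sourceNamespace: openshift-marketplace
EOF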
Create an S3-compatible object storage bucket with OpenShift Data Foundation for the Loki operator. Retrieve the credentials for the bucket, and create a secret with those credentials.
Change to the terminal window, and change to the ~/DO380/labs/logging-central/ directory.
[student@workstation ~]$ cd ~/DO380/labs/logging-central/
Create an ObjectBucketClaim resource YAML file for a bucket called loki-bucket-odf in the openshift-logging namespace.
The Loki operator uses this bucket.
You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/objectbucket.yaml file.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: loki-bucket-odf
  namespace: openshift-logging
spec:
  generateBucketName: loki-bucket-odf
  storageClassName: openshift-storage.noobaa.io
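Optionally, before creating the resource, you can validate your completed file locally. A quick check that does not change anything in the cluster:
[student@workstation logging-central]$ oc create -f objectbucket.yaml \
  --dry-run=client -o yaml
The command prints the object that would be created, so you can confirm the name, namespace, and storage class before you submit it.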
Create the ObjectBucketClaim resource.
[student@workstation logging-central]$ oc create -f objectbucket.yaml
objectbucketclaim.objectbucket.io/loki-bucket-odf created
Verify that the object bucket claim is created and in the Bound phase.
[student@workstation logging-central]$ oc get obc -n openshift-logging
NAME STORAGE-CLASS PHASE AGE
loki-bucket-odf openshift-storage.noobaa.io Bound 12m
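Instead of rerunning oc get until the claim is bound, you can wait for the phase directly. A brief sketch, assuming that your oc client supports JSONPath conditions with oc wait:
[student@workstation logging-central]$ oc wait obc/loki-bucket-odf -n openshift-logging \
  --for=jsonpath='{.status.phase}'=Bound --timeout=300s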
Retrieve the S3 bucket information and credentials, and store them in environment variables. When an object bucket claim is created, OpenShift Data Foundation creates a configuration map with the bucket information and a secret with the bucket credentials, both with the same name as the claim. The bucket credentials differ on your system.
[student@workstation logging-central]$ BUCKET_HOST=$(oc get -n openshift-logging \
configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}'); \
BUCKET_NAME=$(oc get -n openshift-logging configmap loki-bucket-odf \
-o jsonpath='{.data.BUCKET_NAME}'); \
BUCKET_PORT=$(oc get -n openshift-logging configmap loki-bucket-odf \
-o jsonpath='{.data.BUCKET_PORT}'); \
ACCESS_KEY_ID=$(oc get -n openshift-logging secret loki-bucket-odf \
-o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d); \
SECRET_ACCESS_KEY=$(oc get -n openshift-logging secret loki-bucket-odf \
-o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
Create a secret called logging-loki-odf in the openshift-logging namespace with the bucket credentials.
[student@workstation logging-central]$ oc create secret generic logging-loki-odf \
-n openshift-logging \
--from-literal=access_key_id=${ACCESS_KEY_ID} \
--from-literal=access_key_secret=${SECRET_ACCESS_KEY} \
--from-literal=bucketnames=${BUCKET_NAME} \
--from-literal=endpoint=https://${BUCKET_HOST}:${BUCKET_PORT}
secret/logging-loki-odf created
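To double-check the values that you stored in the secret, you can print the decoded keys without writing them to files. A short sketch:
[student@workstation logging-central]$ oc extract secret/logging-loki-odf \
  -n openshift-logging --to=-
The output lists each key, such as bucketnames and endpoint, followed by its decoded value.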
Create a LokiStack instance by using the bucket as the storage.
Create a LokiStack resource YAML file for an instance called logging-loki in the openshift-logging namespace.
This instance uses the bucket as the storage.
You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/lokistack.yaml file.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.demo
  storage:
    secret:
      name: logging-loki-odf
      type: s3
    tls:
      caName: openshift-service-ca.crt
  storageClassName: ocs-external-storagecluster-ceph-rbd
  tenants:
    mode: openshift-logging
Create the LokiStack resource.
[student@workstation logging-central]$ oc create -f lokistack.yaml
lokistack.loki.grafana.com/logging-loki created
Verify that the LokiStack pods are up and running.
[student@workstation logging-central]$ oc get pods -n openshift-logging
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-554849f7dd-9tcz2 1/1 Running 0 20m
logging-loki-compactor-0 1/1 Running 0 98s
logging-loki-distributor-64c798c4c5-6wpxp 1/1 Running 0 99s
logging-loki-gateway-68fb59cdf5-6b8mt 2/2 Running 0 98s
logging-loki-gateway-68fb59cdf5-7qf4s 2/2 Running 0 98s
logging-loki-index-gateway-0 1/1 Running 0 98s
logging-loki-ingester-0 1/1 Running 0 99s
logging-loki-querier-577b55f8d5-f4cfb 1/1 Running 0 98s
logging-loki-query-frontend-775755684d-f94bd 1/1 Running 0 98s
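You can also wait for the LokiStack instance itself to report that it is ready. A sketch that assumes the resource exposes a Ready status condition:
[student@workstation logging-central]$ oc wait lokistack/logging-loki -n openshift-logging \
  --for=condition=Ready --timeout=5m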
Create a ClusterLogging instance by using loki as the log store, vector as the collector, and the ocp-console as the visualization console.
Create a ClusterLogging resource YAML file by using loki as the log store, vector as the collector, and the ocp-console as the visualization console.
You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/clusterlogging.yaml file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector
  visualization:
    type: ocp-console
Create the ClusterLogging resource.
[student@workstation logging-central]$ oc create -f clusterlogging.yaml
clusterlogging.logging.openshift.io/instance created
Verify that the ClusterLogging pods are up and running.
[student@workstation logging-central]$ oc get pods -n openshift-logging
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-554849f7dd-9tcz2 1/1 Running 0 24m
collector-7rb2n 1/1 Running 0 93s
collector-bkj8x 1/1 Running 0 93s
collector-cng5z 1/1 Running 0 93s
collector-hx2gd 1/1 Running 0 93s
collector-x92zq 1/1 Running 0 93s
collector-xlqw9 1/1 Running 0 93s
...output omitted...
logging-view-plugin-5b9b5b7bdc-tvkqk 1/1 Running 0 94s
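The collector pods run on every node. As an additional check, you can inspect the daemonset that manages them; this sketch assumes that the daemonset is named collector, matching the pod names in the listing:
[student@workstation logging-central]$ oc get daemonset collector -n openshift-logging
The DESIRED and READY counts should match the number of nodes in the cluster.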
Enable the console plug-in for the OpenShift Logging operator. Verify that you have access to the logs.
Change to the web console browser. Click Operators → Installed Operators, and select the openshift-logging project from the Project drop-down menu.
Click Red Hat OpenShift Logging, click Disabled in the Console plugin column, select Enable, and click Save.
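Alternatively, you can enable the plug-in from the terminal by adding it to the console operator configuration. A sketch only; it assumes that the spec.plugins list already exists on the cluster console resource:
[student@workstation logging-central]$ oc patch consoles.operator.openshift.io cluster \
  --type json \
  -p '[{"op": "add", "path": "/spec/plugins/-", "value": "logging-view-plugin"}]'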
Reload the web console, and navigate to Observe → Logs.
If the Observe → Logs menu is not available, then wait until the web console shows the Web console update is available message and reload the web console.
You have access to the logs for the application and infrastructure resources.
By default, the ClusterLogging instance includes logs for the application and infrastructure, but not the audit logs.
Observe the application logs, which are selected by default.
From the drop-down menu, select the infrastructure logs and observe them.

From the drop-down menu, select the audit logs and observe the No datapoints found message.
You receive this message because the ClusterLogging instance does not forward the audit logs.
Include the audit logs by creating a log forwarder for the application, infrastructure, and audit logs to the LokiStack resource.
Change to the terminal window, and create a ClusterLogForwarder resource YAML file for a log forwarder called instance in the openshift-logging namespace.
The log forwarder must forward the application, infrastructure, and audit logs.
You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/forwarder.yaml file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default
    inputRefs:
    - infrastructure
    - application
    - audit
    outputRefs:
    - default
Create the ClusterLogForwarder resource.
[student@workstation logging-central]$ oc create -f forwarder.yaml
clusterlogforwarder.logging.openshift.io/instance created
Change to the web console browser and reload it. You have access to the audit logs.
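You can also confirm from the terminal that the forwarder configuration was accepted by reviewing its status conditions. A brief sketch; the exact condition names can vary between logging versions:
[student@workstation logging-central]$ oc get clusterlogforwarder instance \
  -n openshift-logging -o jsonpath='{.status.conditions}{"\n"}'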
Verify that the infrastructure and audit logs are sent to Loki, by trying to open an SSH connection to one of the compute nodes.
Change to the terminal window.
Try to open an SSH connection to the worker01.ocp4.example.com node. The connection is rejected.
[student@workstation logging-central]$ ssh worker01.ocp4.example.com
Warning: Permanently added 'worker01.ocp4.example.com' (ED25519) to the list of known hosts.
student@worker01.ocp4.example.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Change to the web console browser, and click the refresh button in the upper-right corner to update the logs.
Ensure that the audit logs are selected in the drop-down menu.
Search for SSH connection messages by typing ssh in the Search by Content field and running the search.
Unfold the information for the first audit log, which shows a failed attempt to log in to the worker01 node.
From the drop-down menu, select the infrastructure logs. Click Show Query, and then click Run Query.
Modify the query to filter logs only from the sshd service.
The query should read as follows:
{ log_type="infrastructure" } | json | systemd_u_SYSLOG_IDENTIFIER="sshd"
Click Run Query and verify that you receive logs from the sshd service.
Create a project and a pod as the developer user and verify that this user has access to the pod logs.
Change to the terminal window and log in to the OpenShift cluster as the developer user with developer as the password.
[student@workstation logging-central]$ oc login -u developer -p developer
Login successful.
...output omitted...
Create the testing-logs project.
[student@workstation logging-central]$ oc new-project testing-logs
Now using project "testing-logs" on server "https://api.ocp4.example.com:6443".
...output omitted...
Create a test-date pod.
[student@workstation logging-central]$ oc run test-date --restart 'Never' \
--image registry.ocp4.example.com:8443/ubi9/ubi -- date
pod/test-date created
Verify that the pod is in the Completed status.
[student@workstation logging-central]$ oc get pods
NAME READY STATUS RESTARTS AGE
test-date 0/1 Completed 0 56s
Review the logs for the pod by using the terminal.
[student@workstation logging-central]$ oc logs test-date
Tue Jan 16 11:15:15 UTC 2024
Delete the pod.
[student@workstation logging-central]$ oc delete pod test-date
pod "test-date" deleted
Try to retrieve the logs for the pod, and verify that the developer user no longer has access to the pod logs.
[student@workstation logging-central]$ oc logs test-date
Error from server (NotFound): pods "test-date" not found
Log in as the developer user and verify that the user has no access to the logs.
Change to the browser window, open a private window, and navigate to https://console-openshift-console.apps.ocp4.example.com
Log in as the developer user with developer as the password.
Navigate to Observe.
Verify that the testing-logs project is selected.
Change to the Logs tab.
The developer user has no permissions to retrieve the pod logs.

Review the test-date pod logs by using the OpenShift logging operator.
The admin user has access to the stored pod logs.
Change to the web console browser where the admin user is logged in.
From the drop-down menu where the infrastructure logs are currently selected, select the application logs.
Click the refresh button in the upper-right corner to update the logs.
Modify the query to filter the results for the testing-logs namespace.
The query should read as follows:
{ log_type="application", kubernetes_namespace_name="testing-logs" } | json
Click Run Query. Retrieve the information for the only entry, which shows the logs from the deleted pod.
Give the developer user permission to view the logs in the testing-logs project, and verify that the user has access to the logs.
Give the permission to the developer user by assigning them the cluster-logging-application-view cluster role.
Change to the terminal window and log in to the OpenShift cluster as the admin user with redhatocp as the password.
[student@workstation logging-central]$ oc login -u admin -p redhatocp
Login successful.
...output omitted...
Review the role that is required to provide the developer user with access to the application logs.
You can find an example in the ~/DO380/labs/logging-central/developer-role.yaml file.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-application-logs
  namespace: testing-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-application-view
subjects:
- kind: User
  name: developer
  apiGroup: rbac.authorization.k8s.io
Apply the role to the developer user.
[student@workstation logging-central]$ oc create -f developer-role.yaml
rolebinding.rbac.authorization.k8s.io/view-application-logs created
Change to the web console in the private browser where the developer user is logged in, and refresh it.
Verify that the developer user has access to the deleted pod logs.
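For reference, you can review exactly what the cluster role grants, and create an equivalent role binding with a single command instead of the YAML file. A sketch; run both commands as the admin user:
[student@workstation logging-central]$ oc describe clusterrole cluster-logging-application-view
[student@workstation logging-central]$ oc adm policy add-role-to-user \
  cluster-logging-application-view developer -n testing-logs
The oc adm policy command creates a role binding in the testing-logs namespace that references the cluster role, equivalent to the developer-role.yaml file.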
Close both web browser windows, and change to the /home/student directory in the terminal window.
[student@workstation logging-central]$ cd