
Guided Exercise: Centralized Logging

Deploy OpenShift Logging for short-term log retention and aggregation.

Outcomes

  • Use Loki as the log store for OpenShift Logging.

  • Use Vector as the collector and the OpenShift web UI for log visualization.

As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.

[student@workstation ~]$ lab start logging-central

Instructions

Your company requires you to configure the deployed OpenShift Logging operator for short-term log retention and aggregation. You must configure OpenShift Logging by using Vector, Loki, and the OpenShift web console. You must also grant the cluster-logging-application-view cluster role to the developer user so that this user can retrieve application logs for the testing-logs project.

  1. As the OpenShift admin user, install the Loki operator in the openshift-operators-redhat namespace.

    1. Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.

    2. Click Red Hat Identity Management and log in as the admin user with redhatocp as the password.

    3. Navigate to Operators → OperatorHub.

    4. Click Loki Operator, and then click Install.

    5. Select Enable Operator recommended cluster monitoring on this Namespace and click Install. The Operator Lifecycle Manager (OLM) can take a few minutes to install the operator. Click View Operator to navigate to the operator details.
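      Optionally, you can confirm the installation from the terminal. This check assumes that the operator's ClusterServiceVersion (CSV) is created in the openshift-operators-redhat namespace and eventually reports the Succeeded phase.

      [student@workstation ~]$ oc get csv -n openshift-operators-redhat
      ...output omitted...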

  2. Create an S3-compatible object storage bucket with OpenShift Data Foundation for the Loki operator. Retrieve the credentials for the bucket, and create a secret with those credentials.

    1. Switch to the terminal window, and change to the ~/DO380/labs/logging-central/ directory.

      [student@workstation ~]$ cd ~/DO380/labs/logging-central/
    2. Create an ObjectBucketClaim resource YAML file for a bucket called loki-bucket-odf in the openshift-logging namespace. The Loki operator uses this bucket. You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/objectbucket.yaml file.

      apiVersion: objectbucket.io/v1alpha1
      kind: ObjectBucketClaim
      metadata:
        name: loki-bucket-odf
        namespace: openshift-logging
      spec:
        generateBucketName: loki-bucket-odf
        storageClassName: openshift-storage.noobaa.io
    3. Create the ObjectBucketClaim resource.

      [student@workstation logging-central]$ oc create -f objectbucket.yaml
      objectbucketclaim.objectbucket.io/loki-bucket-odf created
    4. Verify that the object bucket claim is created and in the Bound phase.

      [student@workstation logging-central]$ oc get obc -n openshift-logging
      NAME              STORAGE-CLASS                 PHASE   AGE
      loki-bucket-odf   openshift-storage.noobaa.io   Bound   12m
    5. Retrieve the S3 bucket information and credentials, and store them in environment variables. When an object bucket claim is created, OpenShift Data Foundation creates a secret and a configuration map with the same name as the claim, which hold the bucket credentials and connection information. The bucket credentials differ on your system. You can optionally echo the variables afterward to confirm that they are set, as shown after the command.

      [student@workstation logging-central]$ BUCKET_HOST=$(oc get -n openshift-logging \
        configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}'); \
        BUCKET_NAME=$(oc get -n openshift-logging configmap loki-bucket-odf \
        -o jsonpath='{.data.BUCKET_NAME}'); \
        BUCKET_PORT=$(oc get -n openshift-logging configmap loki-bucket-odf \
        -o jsonpath='{.data.BUCKET_PORT}'); \
        ACCESS_KEY_ID=$(oc get -n openshift-logging secret loki-bucket-odf \
        -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d); \
        SECRET_ACCESS_KEY=$(oc get -n openshift-logging secret loki-bucket-odf \
        -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
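      Optionally, print the variables to confirm that they are set. The values differ on your system.

      [student@workstation logging-central]$ echo "${BUCKET_NAME} ${BUCKET_HOST}:${BUCKET_PORT}"
      ...output omitted...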
    6. Create a secret called logging-loki-odf in the openshift-logging namespace with the bucket credentials.

      [student@workstation logging-central]$ oc create secret generic logging-loki-odf \
        -n openshift-logging \
        --from-literal=access_key_id=${ACCESS_KEY_ID} \
        --from-literal=access_key_secret=${SECRET_ACCESS_KEY} \
        --from-literal=bucketnames=${BUCKET_NAME} \
        --from-literal=endpoint=https://${BUCKET_HOST}:${BUCKET_PORT}
      secret/logging-loki-odf created
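      Optionally, verify the contents of the new secret. The oc extract command decodes each key of the secret and, with --to=-, prints the values to the terminal. The credentials differ on your system.

      [student@workstation logging-central]$ oc extract secret/logging-loki-odf \
        -n openshift-logging --to=-
      ...output omitted...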
  3. Create a LokiStack instance by using the bucket as the storage.

    1. Create a LokiStack resource YAML file for an instance called logging-loki in the openshift-logging namespace. This instance uses the bucket as the storage. You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/lokistack.yaml file.

      apiVersion: loki.grafana.com/v1
      kind: LokiStack
      metadata:
        name: logging-loki
        namespace: openshift-logging
      spec:
        size: 1x.demo
        storage:
          secret:
            name: logging-loki-odf
            type: s3
          tls:
            caName: openshift-service-ca.crt
        storageClassName: ocs-external-storagecluster-ceph-rbd
        tenants:
          mode: openshift-logging
    2. Create the LokiStack resource.

      [student@workstation logging-central]$ oc create -f lokistack.yaml
      lokistack.loki.grafana.com/logging-loki created
    3. Verify that the LokiStack pods are up and running.

      [student@workstation logging-central]$ oc get pods -n openshift-logging
      NAME                                           READY   STATUS    RESTARTS   AGE
      cluster-logging-operator-554849f7dd-9tcz2      1/1     Running   0          20m
      logging-loki-compactor-0                       1/1     Running   0          98s
      logging-loki-distributor-64c798c4c5-6wpxp      1/1     Running   0          99s
      logging-loki-gateway-68fb59cdf5-6b8mt          2/2     Running   0          98s
      logging-loki-gateway-68fb59cdf5-7qf4s          2/2     Running   0          98s
      logging-loki-index-gateway-0                   1/1     Running   0          98s
      logging-loki-ingester-0                        1/1     Running   0          99s
      logging-loki-querier-577b55f8d5-f4cfb          1/1     Running   0          98s
      logging-loki-query-frontend-775755684d-f94bd   1/1     Running   0          98s
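      Optionally, review the status conditions of the LokiStack resource from the terminal. The operator is expected to report a condition with the Ready type after all components start.

      [student@workstation logging-central]$ oc get lokistack logging-loki \
        -n openshift-logging -o jsonpath='{.status.conditions}'
      ...output omitted...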
  4. Create a ClusterLogging instance by using lokistack as the log store type, vector as the collector type, and ocp-console as the visualization type.

    1. Create a ClusterLogging resource YAML file that uses lokistack as the log store type, vector as the collector type, and ocp-console as the visualization type. You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/clusterlogging.yaml file.

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogging
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        managementState: Managed
        logStore:
          type: lokistack
          lokistack:
            name: logging-loki
        collection:
          type: vector
        visualization:
          type: ocp-console
    2. Create the ClusterLogging resource.

      [student@workstation logging-central]$ oc create -f clusterlogging.yaml
      clusterlogging.logging.openshift.io/instance created
    3. Verify that the ClusterLogging pods are up and running.

      [student@workstation logging-central]$ oc get pods -n openshift-logging
      NAME                                           READY   STATUS    RESTARTS   AGE
      cluster-logging-operator-554849f7dd-9tcz2      1/1     Running   0          24m
      collector-7rb2n                                1/1     Running   0          93s
      collector-bkj8x                                1/1     Running   0          93s
      collector-cng5z                                1/1     Running   0          93s
      collector-hx2gd                                1/1     Running   0          93s
      collector-x92zq                                1/1     Running   0          93s
      collector-xlqw9                                1/1     Running   0          93s
      ...output omitted...
      logging-view-plugin-5b9b5b7bdc-tvkqk           1/1     Running   0          94s
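      The collector pods run from a daemon set, with one pod on each node. Optionally, confirm the daemon set from the terminal; the resource name collector is an assumption that matches the collector pod names in the previous output.

      [student@workstation logging-central]$ oc get daemonset collector -n openshift-logging
      ...output omitted...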
  5. Enable the console plug-in for the OpenShift Logging operator. Verify that you have access to the logs.

    1. Change to the web console browser. Navigate to Operators → Installed Operators, and select All Projects from the drop-down menu.

    2. Click Red Hat OpenShift Logging, click Console plugin, select Enable, and click Save.

    3. Reload the web console, and navigate to Observe → Logs. If the Observe → Logs menu is not available, then wait until the web console shows the Web console update is available message, and reload the web console again. You now have access to the application and infrastructure logs. By default, the ClusterLogging instance includes the application and infrastructure logs, but not the audit logs. Observe the application logs, which are selected by default.

    4. From the drop-down menu, select the infrastructure logs and observe them.

    5. From the drop-down menu, select the audit logs and observe the No datapoints found message. You receive this message because the ClusterLogging instance does not forward the audit logs.
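      Optionally, you can confirm from the terminal that the console plug-in is registered in the Console operator configuration. This check assumes that enabling the plug-in adds a logging-view-plugin entry to the spec.plugins list.

      [student@workstation logging-central]$ oc get console.operator.openshift.io cluster \
        -o jsonpath='{.spec.plugins}'
      ...output omitted...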

  6. Include the audit logs by creating a log forwarder that sends the application, infrastructure, and audit logs to the LokiStack resource.

    1. Change to the terminal window, and create a ClusterLogForwarder resource YAML file for a log forwarder called instance in the openshift-logging namespace. The log forwarder must forward the application, infrastructure, and audit logs. You can find an incomplete example for the resource in the ~/DO380/labs/logging-central/forwarder.yaml file.

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        pipelines:
        - name: all-to-default
          inputRefs:
          - infrastructure
          - application
          - audit
          outputRefs:
          - default
    2. Create the ClusterLogForwarder resource.

      [student@workstation logging-central]$ oc create -f forwarder.yaml
      clusterlogforwarder.logging.openshift.io/instance created
    3. Change to the web console browser and reload it. You have access to the audit logs.
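      Optionally, review the status conditions of the log forwarder from the terminal. A valid pipeline configuration is expected to report a condition with the Ready type.

      [student@workstation logging-central]$ oc get clusterlogforwarder instance \
        -n openshift-logging -o jsonpath='{.status.conditions}'
      ...output omitted...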

  7. Verify that the infrastructure and audit logs are sent to Loki by trying to open an SSH connection to one of the compute nodes.

    1. Change to the terminal window. Try to open an SSH connection to the worker01.ocp4.example.com node. The connection is rejected.

      [student@workstation logging-central]$ ssh worker01.ocp4.example.com
      Warning: Permanently added 'worker01.ocp4.example.com' (ED25519) to the list of known hosts.
      student@worker01.ocp4.example.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
    2. Change to the web console browser, and click the refresh button in the upper-right corner to update the logs.

    3. Ensure that the audit logs are selected in the drop-down menu. Search for SSH connection messages by typing ssh in the Search by Content field and clicking Run Query.

    4. Unfold the information for the first audit log, which shows an attempt to log in to the worker01 node with a failed result.

    5. From the drop-down menu, select the infrastructure logs. Click Clear all filters and click Show Query.

    6. Modify the query to filter logs only from the sshd service. The query should read as follows:

      { log_type="infrastructure" } | json | systemd_u_SYSLOG_IDENTIFIER="sshd"
    7. Click Run Query and verify that you receive logs from the sshd service.
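      The query combines a stream selector, the json parser, and a label filter. LogQL also supports line filters, which match against the raw log line. As an example, the following variant keeps only sshd log lines that contain a given string; the Connection string is an assumption, so adjust it to match the messages on your system.

      { log_type="infrastructure" } | json | systemd_u_SYSLOG_IDENTIFIER="sshd" |= "Connection"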

  8. Create a project and a pod as the developer user and verify that this user has access to the pod logs.

    1. Change to the terminal window and log in to the OpenShift cluster as the developer user with developer as the password.

      [student@workstation logging-central]$ oc login -u developer -p developer
      Login successful.
      ...output omitted...
    2. Create the testing-logs project.

      [student@workstation logging-central]$ oc new-project testing-logs
      Now using project "testing-logs" on server "https://api.ocp4.example.com:6443".
      ...output omitted...
    3. Create a test-date pod.

      [student@workstation logging-central]$ oc run test-date --restart 'Never' \
        --image registry.ocp4.example.com:8443/ubi9/ubi -- date
      pod/test-date created
    4. Verify that the pod is in the Completed status.

      [student@workstation logging-central]$ oc get pods
      NAME        READY   STATUS      RESTARTS   AGE
      test-date   0/1     Completed   0          56s
    5. Review the logs for the pod by using the terminal.

      [student@workstation logging-central]$ oc logs test-date
      Tue Jan 16 11:15:15 UTC 2024
    6. Delete the pod.

      [student@workstation logging-central]$ oc delete pod test-date
      pod "test-date" deleted
    7. Try to retrieve the logs for the pod again. Because the pod is deleted, its logs are no longer available through the oc logs command.

      [student@workstation logging-central]$ oc logs test-date
      Error from server (NotFound): pods "test-date" not found
  9. Log in as the developer user and verify that the user has no access to the logs.

    1. Change to the browser window, open a private window, and navigate to https://console-openshift-console.apps.ocp4.example.com

    2. Click Red Hat Identity Management and log in as the developer user with developer as the password. Click Skip tour.

    3. Navigate to Observe. Verify that the testing-logs project is selected. Change to the Logs tab. The developer user does not have permission to retrieve the pod logs.

  10. Review the test-date pod logs by using the OpenShift Logging operator. The admin user has access to the stored pod logs.

    1. Change to the web console browser where the admin user is logged in. From the drop-down menu where the infrastructure logs are currently selected, select the application logs. Click the refresh button in the upper-right corner to update the logs.

    2. Modify the query to filter the results for the testing-logs namespace. The query should read as follows:

      { log_type="application", kubernetes_namespace_name="testing-logs" } | json
    3. Click Run Query. Retrieve the information for the only entry, which shows the logs from the deleted pod.
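      Optionally, you can narrow the query to a single pod. Assuming that the stored records expose the pod name as the kubernetes_pod_name field after JSON parsing, the following variant returns only the logs from the test-date pod:

      { log_type="application", kubernetes_namespace_name="testing-logs" } | json | kubernetes_pod_name="test-date"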

  11. Give the developer user permission to view the logs in the testing-logs project, and verify that the user has access to the logs. Grant the permission by binding the cluster-logging-application-view cluster role to the developer user.

    1. Change to the terminal window and log in to the OpenShift cluster as the admin user with redhatocp as the password.

      [student@workstation logging-central]$ oc login -u admin -p redhatocp
      Login successful.
      ...output omitted...
    2. Review the role binding that grants the developer user access to the application logs. You can find an example in the ~/DO380/labs/logging-central/developer-role.yaml file.

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: view-application-logs
        namespace: testing-logs
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-logging-application-view
      subjects:
      - kind: User
        name: developer
        apiGroup: rbac.authorization.k8s.io
    3. Create the RoleBinding resource to grant the role to the developer user.

      [student@workstation logging-central]$ oc create -f developer-role.yaml
      rolebinding.rbac.authorization.k8s.io/view-application-logs created
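      As a reference, the oc adm policy add-role-to-user command creates an equivalent role binding without a YAML file. It is shown only as an alternative; do not run it in addition to the previous step.

      [student@workstation logging-central]$ oc adm policy add-role-to-user \
        cluster-logging-application-view developer -n testing-logs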
    4. Change to the web console in the private browser where the developer user is logged in, and refresh it.

    5. Verify that the developer user has access to the deleted pod logs.

    6. Close both web browser windows, and change to the /home/student directory in the terminal window.

      [student@workstation logging-central]$ cd

Finish

On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish logging-central
