Lab: Managing Red Hat OpenShift Deployments

Configure and manage application deployments and stateful applications on Red Hat OpenShift.

Configure and manage application health monitoring on Red Hat OpenShift.

Outcomes

  • Deploy applications to Red Hat OpenShift.

  • Configure persistence for deployments.

  • Inject configuration maps and secrets to deployments.

  • Configure liveness and readiness checks.

As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.

[student@workstation ~]$ lab start deployments-review

Instructions

The lab script deploys an ephemeral PostgreSQL database that uses a secret to externalize the user, password, and database parameters.

In this lab, you are asked to make the PostgreSQL database persistent. Then, deploy the expense-service application and connect it to the PostgreSQL database.

Finally, you are asked to configure liveness and readiness probes for the expense-service application deployment.

  1. Log in to Red Hat OpenShift.

    1. Log in to OpenShift as the developer user.

      [student@workstation ~]$ oc login -u developer -p developer \
      https://api.ocp4.example.com:6443
      Login successful.
      ...output omitted...
    2. Ensure that you use the deployments-review project.

      [student@workstation ~]$ oc project deployments-review
      Already on project "deployments-review" on server "https://api.ocp4.example.com:6443".
  2. Create a PersistentVolumeClaim object called postgres-pvc that uses the nfs-storage storage class. Set the storage size to 1Gi and use the ReadWriteOnce access mode.

    1. Create the pvc.yaml file with the following contents:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: postgres-pvc
      spec:
        storageClassName: nfs-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    2. Create the persistent volume claim.

      [student@workstation ~]$ oc create -f pvc.yaml
      persistentvolumeclaim/postgres-pvc created
    3. Verify that the persistent volume claim is bound and ready to be used.

      [student@workstation ~]$ oc describe pvc postgres-pvc
      Name:          postgres-pvc
      Namespace:     deployments-review
      StorageClass:  nfs-storage
      Status:        Bound
      ...output omitted...
  3. Provide a persistent volume for the current PostgreSQL deployment. Mount the postgres-pvc persistent volume claim in the /var/lib/pgsql/data path.

    1. Verify the current volumes in the PostgreSQL deployment.

      [student@workstation ~]$ oc get dc/postgresql -o yaml | \
      grep -C3 postgresql-data
              terminationMessagePolicy: File
              volumeMounts:
              - mountPath: /var/lib/pgsql/data
                name: postgresql-data
            dnsPolicy: ClusterFirst
            restartPolicy: Always
            schedulerName: default-scheduler
      --
            terminationGracePeriodSeconds: 30
            volumes:
            - emptyDir: {}
              name: postgresql-data
        test: false
        triggers:
        - imageChangeParams:

      Currently, the deployment configuration uses the ephemeral postgresql-data volume of type EmptyDir.

    2. Modify the postgresql-data volume to use the postgres-pvc persistent volume claim.

      [student@workstation ~]$ oc edit dc/postgresql
      ...output omitted...
            terminationGracePeriodSeconds: 30
            volumes:
            - persistentVolumeClaim:
                claimName: postgres-pvc
              name: postgresql-data
        test: false
        triggers:
      ...output omitted...

      Save and quit the editor session. Alternatively, you can achieve the same result by using the oc set command:

      [student@workstation ~]$ oc set volume dc/postgresql \
      --add --name=postgresql-data -t pvc --claim-name=postgres-pvc \
      --mount-path /var/lib/pgsql/data --overwrite
      deploymentconfig.apps.openshift.io/postgresql volume updated
  4. Deploy the expense-service application.

    Use the following information to deploy the application:

    Deployment name: expense-service
    Container image: registry.ocp4.example.com:8443/redhattraining/ocpdev-deployments-review:4.12
    Application URL: http://expense-service-deployments-review.apps.ocp4.example.com

    Use the Deployment object to deploy the application.

    Additionally, the application requires the following environment variables to start:

    QUARKUS_DATASOURCE_USERNAME: PostgreSQL connection user
    QUARKUS_DATASOURCE_PASSWORD: PostgreSQL connection password
    QUARKUS_DATASOURCE_JDBC_URL: JDBC URL in the jdbc:postgresql://postgresql:5432/DATABASE format

    Use the secret called postgresql to inject environment variables into the expense-service deployment.
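    The wiring that the following substeps build can be sketched as a fragment of the expense-service container spec. This is a sketch rather than the exact generated manifest; it assumes the secret key names database-user, database-password, and database-name, which match the postgresql secret used in this lab. Note that Kubernetes expands the $(DATABASE_NAME) reference only because DATABASE_NAME is declared earlier in the env list; dependent environment variables resolve only against previously defined entries.

    ```yaml
    # Sketch of the env block for the expense-service container.
    # Assumed secret key names: database-name, database-user, database-password.
    env:
    - name: DATABASE_NAME
      valueFrom:
        secretKeyRef:
          name: postgresql
          key: database-name
    - name: QUARKUS_DATASOURCE_USERNAME
      valueFrom:
        secretKeyRef:
          name: postgresql
          key: database-user
    - name: QUARKUS_DATASOURCE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgresql
          key: database-password
    # $(DATABASE_NAME) expands because DATABASE_NAME is declared above it.
    - name: QUARKUS_DATASOURCE_JDBC_URL
      value: jdbc:postgresql://postgresql:5432/$(DATABASE_NAME)
    ```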

    1. Deploy the expense-service application:

      [student@workstation ~]$ oc new-app --name=expense-service \
      --image=registry.ocp4.example.com:8443/redhattraining/ocpdev-deployments-review:4.12
      --> Found container image ...output omitted...
    2. Set the environment variables to use the postgresql secret values.

      [student@workstation ~]$ oc set env deploy/expense-service \
      --from=secret/postgresql
      ...output omitted...
      deployment.apps/expense-service updated
    3. Edit the deployment and modify the environment variable names to fit the application requirements.

      [student@workstation ~]$ oc edit deploy/expense-service
      ...output omitted...
      spec:
        containers:
        - env:
          - name: DATABASE_NAME
            valueFrom:
              secretKeyRef:
                key: database-name
                name: postgresql
          - name: QUARKUS_DATASOURCE_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: postgresql
          - name: QUARKUS_DATASOURCE_USERNAME
            valueFrom:
              secretKeyRef:
                key: database-user
                name: postgresql
      ...output omitted...

      Save the changes and exit your editor to apply the changes.

    4. Use the DATABASE_NAME environment variable to construct the QUARKUS_DATASOURCE_JDBC_URL environment variable.

      [student@workstation ~]$ oc set env deploy/expense-service \
      QUARKUS_DATASOURCE_JDBC_URL='jdbc:postgresql://postgresql:5432/$(DATABASE_NAME)'
      deployment.apps/expense-service updated
    5. Verify the environment variable configuration.

      [student@workstation ~]$ oc describe deploy/expense-service | \
      grep -A4 Environment
      Environment:
        DATABASE_NAME:                <set to the key 'database-name' in secret 'postgresql'>      Optional: false
        QUARKUS_DATASOURCE_PASSWORD:  <set to the key 'database-password' in secret 'postgresql'>  Optional: false
        QUARKUS_DATASOURCE_USERNAME:  <set to the key 'database-user' in secret 'postgresql'>      Optional: false
        QUARKUS_DATASOURCE_JDBC_URL:  jdbc:postgresql://postgresql:5432/$(DATABASE_NAME)
    6. Expose the expense-service service.

      [student@workstation ~]$ oc expose svc expense-service
      route.route.openshift.io/expense-service exposed
    7. Test that the application responds to requests.

      [student@workstation ~]$ curl -s \
      expense-service-deployments-review.apps.ocp4.example.com/expenses | jq
      [
        {
          "id": 5,
          "amount": 15.00,
          "associateId": 1,
          "name": "Phone",
          "paymentMethod": "CASH",
          "uuid": "4ee81cc3-83ab-1ef2-523b-d67f87793255"
        },
      ...output omitted...
  5. Configure liveness and readiness probes for the application deployment. Use the /q/health/live endpoint for the liveness probe, and the /q/health/ready endpoint for the readiness probe.

    Both probes should succeed after one successful call and fail after one unsuccessful call. Set the timeout to 1 second, the initial delay to 5 seconds, and the period to 5 seconds.

    Finally, both probes should use port 8080.
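    The oc set probe commands in the following substeps add stanzas equivalent to this container spec fragment (a sketch of the generated fields, assuming HTTP GET probes):

    ```yaml
    # Sketch of the probe stanzas added to the expense-service container spec.
    livenessProbe:
      httpGet:
        path: /q/health/live
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 1
      successThreshold: 1
      failureThreshold: 1
    readinessProbe:
      httpGet:
        path: /q/health/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 1
      successThreshold: 1
      failureThreshold: 1
    ```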

    1. Configure the liveness probe.

      [student@workstation ~]$ oc set probe deploy/expense-service \
      --liveness --get-url=http://:8080/q/health/live --timeout-seconds=1 \
      --initial-delay-seconds=5 --success-threshold=1 --failure-threshold=1 \
      --period-seconds=5
      deployment.apps/expense-service probes updated
    2. Configure the readiness probe.

      [student@workstation ~]$ oc set probe deploy/expense-service \
      --readiness --get-url=http://:8080/q/health/ready --timeout-seconds=1 \
      --initial-delay-seconds=5 --success-threshold=1 --failure-threshold=1 \
      --period-seconds=5
      deployment.apps/expense-service probes updated
    3. Verify probes in the deployment.

      [student@workstation ~]$ oc describe deploy/expense-service | \
      grep "http-get"
      Liveness:    http-get http://:8080/q/health/live delay=5s timeout=1s period=5s #success=1 #failure=1
      Readiness:   http-get http://:8080/q/health/ready delay=5s timeout=1s period=5s #success=1 #failure=1

Evaluation

As the student user on the workstation machine, use the lab command to grade your work. Correct any reported failures and rerun the command until successful.

[student@workstation ~]$ lab grade deployments-review

Finish

As the student user on the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish deployments-review

Revision: do288-4.12-0d49506