Guided Exercise: OADP Operator Deployment and Features

Deploy the OADP operator and perform a backup to validate that OADP is functional in a cluster.

Outcomes

  • Install and configure the OpenShift API for Data Protection operator.

  • Back up an application by using OADP to validate the configuration.

As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.

[student@workstation ~]$ lab start backup-oadp

Instructions

Install and configure the OADP operator with the CSI snapshot and Data Mover feature.

Create an S3-compatible object storage bucket with OpenShift Data Foundation and configure it as a backup storage location.

To validate the configuration, back up the application in the production project by using OADP and verify that the backup is successfully created in the S3 bucket.

Important

All other exercises in this chapter depend on your cluster being correctly configured to use OpenShift API for Data Protection.

If you cannot complete this exercise, then delete and re-create your lab environment. After re-creating your lab environment, you can either attempt this exercise again, or the start script for another exercise can configure OADP for you.

  1. As the admin user, locate and navigate to the OpenShift web console.

    1. Log in to your OpenShift cluster as the admin user.

      [student@workstation ~]$ oc login -u admin -p redhatocp \
        https://api.ocp4.example.com:6443
      Login successful.
      
      ...output omitted...
    2. Identify the URL for the OpenShift web console.

      [student@workstation ~]$ oc whoami --show-console
      https://console-openshift-console.apps.ocp4.example.com
    3. Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.

    4. Click Red Hat Identity Management and log in as the admin user with the redhatocp password.

  2. Install the OADP operator.

    1. Click Operators → OperatorHub. In the Filter by keyword field, type OADP to locate the OADP operator, and then click OADP Operator.

    2. The web console displays information about the OADP operator. Select the stable-1.2 channel and the 1.2.1 version. Click Install to proceed to the Install Operator page.

    3. Click Install to install the operator.

      Note

      In the lab environment, the OADP operator has two available update channels: stable-1.2 and stable-1.3. Ensure that you select the stable-1.2 channel.

      The OADP operator can be installed only in a single namespace, which is named openshift-adp by default.

    4. Wait until the installation is complete and the web console displays the "ready for use" status.

  3. Install the VolSync operator. This operator is required to enable the Data Mover feature in OADP 1.2.

    1. Click Operators → OperatorHub. In the Filter by keyword field, type VolSync to locate the VolSync operator, and then click VolSync.

    2. The web console displays information about the VolSync operator. Click Install to proceed to the Install Operator page.

    3. Click Install to install the operator with the default options.

    4. Wait until the installation is complete and the web console displays the "ready for use" status.

  4. Create an S3-compatible object storage bucket for OADP to store backups.

    1. From the terminal, change to the openshift-adp project.

      [student@workstation ~]$ oc project openshift-adp
      Now using project "openshift-adp" on server "https://api.ocp4.example.com:6443".
    2. Change to the ~/DO380/labs/backup-oadp directory.

      [student@workstation ~]$ cd ~/DO380/labs/backup-oadp
      [student@workstation backup-oadp]$
    3. Create an object bucket claim by using the following values:

      Attribute            Value
      Name                 backup
      Namespace            openshift-adp
      storageClassName     openshift-storage.noobaa.io
      generateBucketName   backup

      You can use the resource definition in the ~/DO380/labs/backup-oadp/obc-backup.yml path.

      apiVersion: objectbucket.io/v1alpha1
      kind: ObjectBucketClaim
      metadata:
        name: backup
        namespace: openshift-adp
      spec:
        storageClassName: openshift-storage.noobaa.io
        generateBucketName: backup
      [student@workstation backup-oadp]$ oc apply -f obc-backup.yml
      objectbucketclaim.objectbucket.io/backup created

      Important

      For simplicity, this environment uses ODF as the back end for both application data storage and backup storage. In a disaster, both the data and its backups could therefore be lost at the same time.

      For production environments, Red Hat recommends using a separate storage location for backups and for application data.

    4. Verify that the object bucket claim is created and in the Bound phase.

      [student@workstation backup-oadp]$ oc get obc
      NAME    STORAGE-CLASS                PHASE
      backup  openshift-storage.noobaa.io  Bound
  5. Retrieve the S3 bucket information and credentials. You can use a second terminal for this step to make it easier to copy and paste values in the remainder of this section.

    When an object bucket claim is created, Red Hat OpenShift Data Foundation creates a matching secret and configuration map with the same name, which contain the bucket information and credentials.

    1. Retrieve the bucket name and bucket host from the generated configuration map.

      [student@workstation backup-oadp]$ oc extract --to=- cm/backup
      # BUCKET_HOST
      s3.openshift-storage.svc
      # BUCKET_NAME
      backup-7d9...f4c
      # BUCKET_PORT
      443
      # BUCKET_REGION
      
      # BUCKET_SUBREGION

      Note

      The s3.openshift-storage.svc service uses a TLS certificate that is signed with the self-signed service CA that OpenShift manages. To prevent a "certificate signed by unknown authority" error, you must include the CA certificate in the OADP configuration.

    2. Retrieve the service CA certificate for the s3.openshift-storage.svc endpoint. This certificate is available in the openshift-service-ca.crt configuration map in any namespace.

      Encode the certificate in Base64 format and save the value for the next step.

      [student@workstation backup-oadp]$ oc get cm/openshift-service-ca.crt \
        -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo
      ...output omitted...
    3. Retrieve the bucket credentials from the generated secret.

      [student@workstation backup-oadp]$ oc extract --to=- secret/backup
      # AWS_ACCESS_KEY_ID
      JNmbmhD0AQ3BFMABtXC4
      # AWS_SECRET_ACCESS_KEY
      xjkwp8bXeJazTgC4u/WJTbzgiD0tfWGt8OtdADLz

      Note

      The values differ in your environment.

    4. Identify the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace. You use this URL to connect to the S3 bucket from the workstation machine in the next step.

      [student@workstation backup-oadp]$ oc get route s3 -n openshift-storage
      NAME   HOST/PORT                                    ...
      s3     s3-openshift-storage.apps.ocp4.example.com   ...
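The CA certificate handling in the previous steps can be sanity-checked locally: a value encoded with `base64 -w0` must decode back to the original PEM file byte for byte. The following sketch uses a placeholder PEM file (`service-ca.crt` here is a hypothetical local stand-in, not the actual cluster certificate):

```shell
# Create a placeholder PEM file standing in for the service CA certificate.
cat > service-ca.crt <<'EOF'
-----BEGIN CERTIFICATE-----
MIIB...placeholder...
-----END CERTIFICATE-----
EOF

# Encode on a single line, as the caCert field of the
# DataProtectionApplication resource expects.
CA_B64=$(base64 -w0 < service-ca.crt)

# Decoding must reproduce the original file exactly.
echo "$CA_B64" | base64 -d > decoded.crt
if cmp -s decoded.crt service-ca.crt; then
  echo "round-trip OK"
fi
```

If the decoded output does not match the original file, the value was truncated or wrapped during copy and paste, which is a common cause of a rejected caCert value.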
  6. Configure the s3cmd command to browse and validate the object storage configuration.

    1. Create a .s3cfg configuration file in the student home directory with the S3 credentials and S3 endpoint URL from the previous step.

      You can use the sample configuration file in the ~/DO380/labs/backup-oadp/s3cfg path.

      [student@workstation backup-oadp]$ vim ~/.s3cfg
      access_key = AWS_ACCESS_KEY_ID
      secret_key = AWS_SECRET_ACCESS_KEY
      host_base = s3-openshift-storage.apps.ocp4.example.com
      host_bucket = s3-openshift-storage.apps.ocp4.example.com/%(bucket)s
      signature_v2 = True

      Note

      Replace the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values with the values from the previous step.

    2. List the content of the bucket by using the s3cmd command to validate the configuration. Use the bucket name from previous steps.

      [student@workstation backup-oadp]$ s3cmd la

      Note

      Because the bucket is empty at this stage, the command returns an empty line. The command returns an error message if the configuration is incorrect.
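The manual edit of the ~/.s3cfg file can also be scripted. This sketch writes the same five settings from shell variables; the credential and host values are placeholders to replace with the output of the previous steps, and the file is written to ./s3cfg here to avoid overwriting a real configuration:

```shell
# Placeholder credentials; in the lab, substitute the values that
# 'oc extract secret/backup' and 'oc get route s3 -n openshift-storage' return.
ACCESS_KEY="JNmbmhD0AQ3BFMABtXC4"
SECRET_KEY="xjkwp8bXeJazTgC4u/WJTbzgiD0tfWGt8OtdADLz"
S3_HOST="s3-openshift-storage.apps.ocp4.example.com"

# Generate the s3cmd configuration file (the exercise expects ~/.s3cfg).
cat > s3cfg <<EOF
access_key = ${ACCESS_KEY}
secret_key = ${SECRET_KEY}
host_base = ${S3_HOST}
host_bucket = ${S3_HOST}/%(bucket)s
signature_v2 = True
EOF

# Each setting is one "key = value" line.
grep -c '=' s3cfg
```

The `%(bucket)s` token in `host_bucket` is an s3cmd placeholder that expands to the bucket name at request time, so it is written literally.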

  7. Prepare the DataProtectionApplication configuration in a new dpa-backup.yml YAML file.

    Enable both the csi and vsm plug-ins and the Data Mover feature for the CSI snapshot.

    Use the S3 bucket information from the previous steps to configure the default backup storage location.

    You can find an example of the resource definition in the ~/DO380/labs/backup-oadp/dpa-backup.yml file.

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: oadp-backup
      namespace: openshift-adp
    spec:
      features:
        dataMover:
          enable: true
          credentialName: restic-enc-key 1
      configuration:
        velero:
          defaultPlugins:
            - aws
            - openshift
            - csi
            - vsm
      backupLocations:
        - velero:
            config:
              profile: "default"
              region: "us-east-1"
              s3Url: https://s3.openshift-storage.svc
              s3ForcePathStyle: "true" 2
              insecureSkipTLSVerify: "true" 3
            provider: aws
            default: true
            credential:
              key: cloud
              name:  cloud-credentials 4
            objectStorage:
              bucket: backup-9e2...20b 5
              prefix: oadp
              caCert: LS0tLS0...LS0tLS0K 6

    1

    Secret with the encryption key for the backup of the persistent volume data. You create the restic-enc-key secret in a later step.

    2

    The s3ForcePathStyle attribute must be set to true when using ODF.

    3

    The insecureSkipTLSVerify attribute must be set to true when using a self-signed certificate for the S3 endpoint.

    4

    Secret with the S3 credentials. You create the cloud-credentials secret in a later step.

    5

    Use the bucket name from the previous step.

    6

    Use the service CA certificate in Base64 format from the previous step.
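Filling in the environment-specific values of the DataProtectionApplication file can be scripted with sed. This is a minimal sketch assuming a hypothetical template file (dpa-template.yml) with BUCKET_NAME and CA_CERT placeholder tokens; the real exercise file is dpa-backup.yml, and the variable values below are placeholders:

```shell
# Hypothetical template fragment with tokens for the two values that
# change between environments.
cat > dpa-template.yml <<'EOF'
    objectStorage:
      bucket: BUCKET_NAME
      prefix: oadp
      caCert: CA_CERT
EOF

# Placeholder values; substitute the bucket name and Base64 CA certificate
# captured in the previous steps.
BUCKET_NAME="backup-9e2...20b"
CA_CERT="LS0tLS0...LS0tLS0K"

# Use '|' as the sed delimiter because Base64 values can contain '/'.
sed -e "s|BUCKET_NAME|${BUCKET_NAME}|" \
    -e "s|CA_CERT|${CA_CERT}|" \
    dpa-template.yml > dpa-backup-rendered.yml

grep 'bucket:' dpa-backup-rendered.yml
```

Rendering from a template keeps the environment-specific values out of the versioned file and makes the step repeatable across lab resets.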

  8. Create the cloud-credentials secret in the openshift-adp namespace with the backup object bucket credentials.

    1. Create a cloud-credentials file with the object bucket credentials. You can use the configuration example in the ~/DO380/labs/backup-oadp/cloud-credentials file.

      [default]
      aws_access_key_id=<AWS_ACCESS_KEY_ID>
      aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

      Note

      Replace the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values with the values from the previous step.

    2. Create the cloud-credentials secret with the cloud-credentials file content.

      [student@workstation backup-oadp]$ oc create secret generic \
        cloud-credentials \
        -n openshift-adp \
        --from-file cloud=cloud-credentials
      secret/cloud-credentials created
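The relationship between this file and the DPA resource can be checked locally: the [default] section name must match the profile value in the backup storage location, and the cloud key that --from-file sets must match credential.key. A quick sketch with placeholder credentials:

```shell
# Placeholder credentials; substitute the values from 'oc extract secret/backup'.
cat > cloud-credentials <<'EOF'
[default]
aws_access_key_id=JNmbmhD0AQ3BFMABtXC4
aws_secret_access_key=xjkwp8bXeJazTgC4u/WJTbzgiD0tfWGt8OtdADLz
EOF

# The DPA references this secret with key 'cloud' and profile 'default':
# the section header must be present for Velero to select the profile.
if grep -q '^\[default\]' cloud-credentials; then
  echo "profile section present"
fi
```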
  9. Create the restic-enc-key secret in the openshift-adp namespace with the encryption key to use for the application data.

    You can use the openssl command to generate a random password to use as the encryption key.

    [student@workstation backup-oadp]$ oc create secret generic \
      restic-enc-key \
      -n openshift-adp \
      --from-literal=RESTIC_PASSWORD=$(openssl rand -base64 24)
    secret/restic-enc-key created
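The openssl rand -base64 24 command generates 24 random bytes and Base64-encodes them, which always yields a 32-character key with no padding (24 bytes in 8 groups of 3, at 4 characters per group). A quick local check:

```shell
# Generate a key the same way the exercise does and inspect its length.
KEY=$(openssl rand -base64 24)

# 24 raw bytes encode to exactly 32 Base64 characters.
echo "${#KEY}"
```

Any sufficiently random string works as the restic encryption key; the Base64 encoding simply keeps the value printable and safe to store in a secret.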
  10. Apply the OADP configuration by using the dpa-backup.yml YAML file from an earlier step.

    [student@workstation backup-oadp]$ oc apply -f dpa-backup.yml
    dataprotectionapplication.oadp.openshift.io/oadp-backup created
  11. Verify that both velero and volume-snapshot-mover deployment objects are created in the openshift-adp namespace and in the Ready state.

    [student@workstation backup-oadp]$ oc get deploy -n openshift-adp
    NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
    openshift-adp-controller-manager   1/1     1            1           24h
    velero                             1/1     1            1           25s
    volume-snapshot-mover              1/1     1            1           25s
  12. Verify that the backupStorageLocation object is created and in the Available phase.

    [student@workstation backup-oadp]$ oc get backupStorageLocation
    NAME            PHASE       LAST VALIDATED   AGE   DEFAULT
    oadp-backup-1   Available   26s              16m   true

    Note

    It can take a minute for the backupStorageLocation resource to enter the Available phase. If the resource is stuck in the Unavailable phase, then use the oc describe backupStorageLocation command to get the status and error messages that relate to the configuration.
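If you script this verification, a simple polling loop avoids re-running the command by hand. In this sketch, get_phase is a hypothetical stub standing in for the real oc query shown in the comment:

```shell
# Stub for local illustration; in the lab this would run:
#   oc get backupStorageLocation oadp-backup-1 -o jsonpath='{.status.phase}'
get_phase() { echo "Available"; }

# Poll up to 30 times, 10 seconds apart, until the phase is Available.
phase=""
for attempt in $(seq 1 30); do
  phase=$(get_phase)
  [ "$phase" = "Available" ] && break
  sleep 10
done
echo "phase=$phase"
```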

  13. Configure all available volume snapshot classes for OADP.

    1. List all available volume snapshot classes.

      [student@workstation backup-oadp]$ oc get volumesnapshotclass
      NAME                                                 ...
      ocs-external-storagecluster-cephfsplugin-snapclass   ...
      ocs-external-storagecluster-rbdplugin-snapclass      ...
    2. For each volume snapshot class, set the deletionPolicy attribute to Retain.

      [student@workstation backup-oadp]$ for class in \
        $(oc get volumesnapshotclass -oname); do
        oc patch $class --type=merge -p '{"deletionPolicy": "Retain"}'
        done
      .../ocs-external-storagecluster-cephfsplugin-snapclass patched
      .../ocs-external-storagecluster-rbdplugin-snapclass patched
    3. For each volume snapshot class, set the velero.io/csi-volumesnapshot-class label to true.

      [student@workstation backup-oadp]$ oc label volumesnapshotclass \
        velero.io/csi-volumesnapshot-class="true" --all
      .../ocs-external-storagecluster-cephfsplugin-snapclass labeled
      .../ocs-external-storagecluster-rbdplugin-snapclass labeled
  14. Back up the production project to validate the OADP configuration.

    You can use the resource definition in the ~/DO380/labs/backup-oadp/backup.yml file.

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: backup-production
      namespace: openshift-adp
    spec:
      includedNamespaces:
      - production
    [student@workstation backup-oadp]$ oc apply -f backup.yml
    backup.velero.io/backup-production created
  15. Wait a couple of minutes and check the backup completion status.

    1. Verify that the backup object is in the Completed phase.

      [student@workstation backup-oadp]$ oc describe backup backup-production
      ...output omitted...
      Status:
        Backup Item Operations Attempted:  1
        Backup Item Operations Completed:  1
        Completion Timestamp:              2023-09-07T16:29:25Z
        Csi Volume Snapshots Attempted:    1
        Csi Volume Snapshots Completed:    1
        Expiration:                        2023-10-07T16:28:10Z
        Format Version:                    1.1.0
        Phase:                             Completed
        Start Timestamp:                   2023-09-07T16:28:10Z
        Version:                           1
      Events:                              <none>
    2. Use the s3cmd command to review the content of the S3 storage.

      [student@workstation backup-oadp]$ s3cmd la -r
      2023-09-07 17:08    2597  s3://backup-f5..c6/docker/registry/v2/.../data 1
      ...output omitted...
      2023-09-07 17:12  168730  s3://backup-f5..c6/oadp/backups/...tar.gz 2
      ...output omitted...
      2023-09-07 17:12     183  s3://backup-f5..c6/openshift-adp/backup-... 3

      1

      The /docker/registry path contains the container images.

      2

      The /oadp/backups path contains the Kubernetes resources.

      3

      The /openshift-adp path contains the encrypted volume snapshots.

  16. Clean up the resources.

    1. Change to the home directory.

      [student@workstation backup-oadp]$ cd
      [student@workstation ~]$
    2. Use the velero command to remove the backup and all associated resources.

      [student@workstation ~]$ oc exec deployment/velero \
        -c velero -it -- \
        ./velero delete backup backup-production
      Are you sure you want to continue (Y/N)? y
      Request to delete backup "backup-production" submitted successfully.
      The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
    3. Wait a couple of minutes and verify that the backup object is deleted.

      [student@workstation ~]$ oc get backup
      No resources found in openshift-adp namespace.
    4. Use the s3cmd command to clean up the object storage bucket. Use the bucket name from previous steps.

      [student@workstation ~]$ s3cmd rm -r --force \
        s3://backup-7d9d5169-6c8d-4b40-bbba-931ddaeb6f4c/
      ...output omitted...

Finish

On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish backup-oadp

Revision: do380-4.14-397a507