Deploy the OADP operator and perform a backup to validate that OADP is functional in a cluster.
Outcomes
Install and configure the OpenShift API for Data Protection operator.
Back up an application by using OADP to validate the configuration.
As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.
[student@workstation ~]$ lab start backup-oadp
Instructions
Install and configure the OADP operator with the CSI snapshot and Data Mover feature.
Create an S3-compatible object storage bucket with OpenShift Data Foundation and configure it as a backup storage location.
To validate the configuration, back up the application in the production project by using OADP and verify that the backup is successfully created in the S3 bucket.
All other exercises in this chapter depend on your cluster being correctly configured to use OpenShift API for Data Protection.
If you cannot complete this exercise, then delete and re-create your lab environment. After re-creating your lab environment, you can either attempt this exercise again, or the start script for another exercise can configure OADP for you.
As the admin user, locate and navigate to the OpenShift web console.
Log in to your OpenShift cluster as the admin user.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login successful.
...output omitted...

Identify the URL for the OpenShift web console.
[student@workstation ~]$ oc whoami --show-console
https://console-openshift-console.apps.ocp4.example.com

Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com.
Log in as the admin user with the redhatocp password.
Install the OADP operator.
Click Operators → OperatorHub.
In the Filter by keyword field, type OADP to locate the OADP operator, and then click the OADP Operator tile.
The web console displays information about the OADP operator.
Select the stable-1.2 channel and the 1.2.1 version.
Click Install to proceed to the Install Operator page.
Click Install to install the operator.
In the lab environment, the OADP operator has two available update channels: stable-1.2 and stable-1.3.
Ensure that you select the stable-1.2 channel.
The OADP operator can be installed only in a single namespace, which is named openshift-adp by default.
Wait until the installation is complete and the web console displays that the operator is ready for use.
Install the VolSync operator. This operator is a requirement to enable Data Mover in OADP 1.2.
Click Operators → OperatorHub.
In the Filter by keyword field, type VolSync to locate the VolSync operator, and then click the VolSync tile.
The web console displays information about the VolSync operator. Click Install to proceed to the Install Operator page.
Click Install to install the operator with the default options.
Wait until the installation is complete and the web console displays that the operator is ready for use.
Create an S3-compatible object storage bucket for OADP to store backups.
From the terminal, change to the openshift-adp project.
[student@workstation ~]$ oc project openshift-adp
Now using project "openshift-adp" on server "https://api.ocp4.example.com:6443".

Change to the ~/DO380/labs/backup-oadp directory.
[student@workstation ~]$ cd ~/DO380/labs/backup-oadp
[student@workstation backup-oadp]$

Create an object bucket claim by using the following values:
| Attribute | Value |
|---|---|
| Name | backup |
| Namespace | openshift-adp |
| storageClassName | openshift-storage.noobaa.io |
| generateBucketName | backup |
You can use the resource definition in the ~/DO380/labs/backup-oadp/obc-backup.yml path.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: backup
  namespace: openshift-adp
spec:
  storageClassName: openshift-storage.noobaa.io
  generateBucketName: backup
[student@workstation backup-oadp]$ oc apply -f obc-backup.yml
objectbucketclaim.objectbucket.io/backup created

This environment uses ODF as a back end for both application data storage and backup storage for simplicity. In a disaster situation, both data and backups can be lost at the same time.
For production environments, Red Hat recommends using different storage locations for backups and for application data.
Verify that the object bucket claim is created and in the Bound phase.
[student@workstation backup-oadp]$ oc get obc
NAME     STORAGE-CLASS                 PHASE
backup   openshift-storage.noobaa.io   Bound
Retrieve the S3 bucket information and credentials. You can use a second terminal for this step to make it easier to copy and paste values in the remainder of this section.
When an object bucket claim is created, Red Hat OpenShift Data Foundation creates a matching secret and configuration map with the same name that contains the bucket information and credentials.
Retrieve the bucket name and bucket host from the generated configuration map.
[student@workstation backup-oadp]$ oc extract --to=- cm/backup
# BUCKET_HOST
s3.openshift-storage.svc
# BUCKET_NAME
backup-7d9...f4c
# BUCKET_PORT
443
# BUCKET_REGION
# BUCKET_SUBREGION
The s3.openshift-storage.svc service uses a TLS certificate that is signed with the self-signed service CA that OpenShift manages.
To prevent a certificate signed by unknown authority error, you must include the CA certificate in the OADP configuration.
Retrieve the service CA certificate for the s3.openshift-storage.svc endpoint.
This certificate is available in the openshift-service-ca.crt configuration map in any namespace.
Encode the certificate in Base64 format and save the value for the next step.
[student@workstation backup-oadp]$ oc get cm/openshift-service-ca.crt \
-o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo
...output omitted...

Retrieve the bucket credentials from the generated secret.
[student@workstation backup-oadp]$ oc extract --to=- secret/backup
# AWS_ACCESS_KEY_ID
JNmbmhD0AQ3BFMABtXC4
# AWS_SECRET_ACCESS_KEY
xjkwp8bXeJazTgC4u/WJTbzgiD0tfWGt8OtdADLz
The values would be different in your environment.
Identify the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace.
You use this URL to connect to the S3 bucket from the workstation machine in the next step.
[student@workstation backup-oadp]$ oc get route s3 -n openshift-storage
NAME   HOST/PORT                                    ...
s3     s3-openshift-storage.apps.ocp4.example.com   ...
Configure the s3cmd command to browse and validate the object storage configuration.
Create a .s3cfg configuration file in the student home directory with the S3 credentials and S3 endpoint URL from the previous step.
You can use the sample configuration file in the ~/DO380/labs/backup-oadp/s3cfg path.
[student@workstation backup-oadp]$ vim ~/.s3cfg
access_key = AWS_ACCESS_KEY_ID
secret_key = AWS_SECRET_ACCESS_KEY
host_base = s3-openshift-storage.apps.ocp4.example.com
host_bucket = s3-openshift-storage.apps.ocp4.example.com/%(bucket)s
signature_v2 = True
Replace the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values with the values from the previous step.
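If you prefer to script this step, the configuration file can be generated with a heredoc instead of editing it by hand. This is an optional sketch: the access and secret keys below are the example values printed earlier in this exercise, so substitute the values from your own environment.

```shell
# Sketch: generate ~/.s3cfg from the values retrieved with `oc extract`.
# These keys are the example values from this exercise; yours differ.
ACCESS_KEY="JNmbmhD0AQ3BFMABtXC4"
SECRET_KEY="xjkwp8bXeJazTgC4u/WJTbzgiD0tfWGt8OtdADLz"
S3_HOST="s3-openshift-storage.apps.ocp4.example.com"

cat > ~/.s3cfg <<EOF
access_key = ${ACCESS_KEY}
secret_key = ${SECRET_KEY}
host_base = ${S3_HOST}
host_bucket = ${S3_HOST}/%(bucket)s
signature_v2 = True
EOF
```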
List the content of the bucket by using the s3cmd command to validate the configuration.
Use the bucket name from previous steps.
[student@workstation backup-oadp]$ s3cmd la

Because the bucket is empty at this stage, the command returns an empty line. The command returns an error message if the configuration is incorrect.
Prepare the DataProtectionApplication configuration in a new dpa-backup.yml YAML file.
Enable both the csi and vsm plug-ins and the Data Mover feature for the CSI snapshot.
Use the S3 bucket information from the previous steps to configure the default backup storage location.
You can find an example of the resource definition in the ~/DO380/labs/backup-oadp/dpa-backup.yml file.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: oadp-backup
  namespace: openshift-adp
spec:
  features:
    dataMover:
      enable: true
      credentialName: restic-enc-key
  configuration:
    velero:
      defaultPlugins:
        - aws
        - openshift
        - csi
        - vsm
  backupLocations:
    - velero:
        config:
          profile: "default"
          region: "us-east-1"
          s3Url: https://s3.openshift-storage.svc
          s3ForcePathStyle: "true"
          insecureSkipTLSVerify: "true"
        provider: aws
        default: true
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: backup-9e2...20b
          prefix: oadp
          caCert: LS0tLS0...LS0tLS0K
In this resource definition, note the following values:

- restic-enc-key: the secret with the encryption key for the backup of the persistent volumes. You create this secret in a later step.
- csi and vsm: the default plug-ins that enable the CSI snapshot and Data Mover features.
- s3Url: the internal S3 endpoint of the OpenShift Data Foundation service.
- cloud-credentials: the secret with the S3 credentials. You create this secret in a later step.
- bucket: the bucket name from the previous step.
- caCert: the service CA certificate in Base64 format from the previous step.
Create the cloud-credentials secret in the openshift-adp namespace with the backup object bucket credentials.
Create a cloud-credentials file with the object bucket credentials.
You can use the configuration example in the ~/DO380/labs/backup-oadp/cloud-credentials file.
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Replace the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values with the values from the previous step.
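As with the .s3cfg file, this file can also be generated with a heredoc. This is an optional sketch: the two variables hold the example values from this exercise and must be replaced with the values extracted from the backup secret in your environment.

```shell
# Sketch: write the cloud-credentials file without editing it by hand.
# Both values below are the example keys from this exercise; yours differ.
AWS_ACCESS_KEY_ID="JNmbmhD0AQ3BFMABtXC4"
AWS_SECRET_ACCESS_KEY="xjkwp8bXeJazTgC4u/WJTbzgiD0tfWGt8OtdADLz"

cat > cloud-credentials <<EOF
[default]
aws_access_key_id=${AWS_ACCESS_KEY_ID}
aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
EOF
```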
Create the cloud-credentials secret with the cloud-credentials file content.
[student@workstation backup-oadp]$ oc create secret generic \
cloud-credentials \
-n openshift-adp \
--from-file cloud=cloud-credentials
secret/cloud-credentials created

Create the restic-enc-key secret in the openshift-adp namespace with the encryption key to use for the application data.
You can use the openssl command to generate a random password to use as the encryption key.
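As an optional sanity check, 24 random bytes always encode to exactly 32 Base64 characters (ceil(24/3) × 4, with no padding), so you can confirm the shape of the generated key before storing it in the secret:

```shell
# 24 bytes -> ceil(24/3) * 4 = 32 Base64 characters, with no padding.
key=$(openssl rand -base64 24)
echo "${#key}"   # prints 32
```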
[student@workstation backup-oadp]$ oc create secret generic \
restic-enc-key \
-n openshift-adp \
--from-literal=RESTIC_PASSWORD=$(openssl rand -base64 24)
secret/restic-enc-key created

Apply the OADP configuration by using the dpa-backup.yml YAML file from an earlier step.
[student@workstation backup-oadp]$ oc apply -f dpa-backup.yml
dataprotectionapplication.oadp.openshift.io/oadp-backup created

Verify that both the velero and volume-snapshot-mover deployment objects are created in the openshift-adp namespace and in the Ready state.
[student@workstation backup-oadp]$ oc get deploy -n openshift-adp
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
openshift-adp-controller-manager   1/1     1            1           24h
velero                             1/1     1            1           25s
volume-snapshot-mover              1/1     1            1           25s
Verify that the backupStorageLocation object is created and in the Available phase.
[student@workstation backup-oadp]$ oc get backupStorageLocation
NAME            PHASE       LAST VALIDATED   AGE   DEFAULT
oadp-backup-1   Available   26s              16m   true
It can take a minute for the backupStorageLocation resource to enter the Available phase.
If the resource is stuck in the Unavailable phase, then use the oc describe backupStorageLocation command to get the status and error messages that relate to the configuration.
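If you script this verification, a small polling loop can replace the manual wait. This is a sketch under stated assumptions: wait_for_phase is a hypothetical helper, not an OADP or oc command, and the oadp-backup-1 name comes from the output shown above.

```shell
# Sketch: poll a command until it reports the expected phase, or give up.
# wait_for_phase "COMMAND" EXPECTED_PHASE [RETRIES] -- hypothetical helper.
wait_for_phase() {
  cmd=$1; expected=$2; retries=${3:-30}
  while [ "$retries" -gt 0 ]; do
    phase=$($cmd)
    [ "$phase" = "$expected" ] && return 0
    retries=$((retries - 1))
    sleep 2
  done
  return 1
}

# Lab usage (assumes the oadp-backup-1 name shown earlier):
# wait_for_phase \
#   "oc get backupStorageLocation oadp-backup-1 -o jsonpath={.status.phase}" \
#   Available
```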
Configure all available volume snapshot classes for OADP.
List all available volume snapshot classes.
[student@workstation backup-oadp]$ oc get volumesnapshotclass
NAME ...
ocs-external-storagecluster-cephfsplugin-snapclass ...
ocs-external-storagecluster-rbdplugin-snapclass      ...

For each volume snapshot class, set the deletionPolicy attribute to Retain.
[student@workstation backup-oadp]$ for class in \
$(oc get volumesnapshotclass -oname); do
oc patch $class --type=merge -p '{"deletionPolicy": "Retain"}'
done
.../ocs-external-storagecluster-cephfsplugin-snapclass patched
.../ocs-external-storagecluster-rbdplugin-snapclass patched

For each volume snapshot class, set the velero.io/csi-volumesnapshot-class label to true.
[student@workstation backup-oadp]$ oc label volumesnapshotclass \
velero.io/csi-volumesnapshot-class="true" --all
.../ocs-external-storagecluster-cephfsplugin-snapclass labeled
.../ocs-external-storagecluster-rbdplugin-snapclass labeled

Back up the production project to validate the OADP configuration.
You can use the resource definition in the ~/DO380/labs/backup-oadp/backup.yml file.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-production
  namespace: openshift-adp
spec:
  includedNamespaces:
    - production
[student@workstation backup-oadp]$ oc apply -f backup.yml
backup.velero.io/backup-production created

Wait a couple of minutes and check the backup completion status.
Verify that the backup object is in the Completed phase.
[student@workstation backup-oadp]$ oc describe backup backup-production
...output omitted...
Status:
Backup Item Operations Attempted: 1
Backup Item Operations Completed: 1
Completion Timestamp: 2023-09-07T16:29:25Z
Csi Volume Snapshots Attempted: 1
Csi Volume Snapshots Completed: 1
Expiration: 2023-10-07T16:28:10Z
Format Version: 1.1.0
Phase: Completed
Start Timestamp: 2023-09-07T16:28:10Z
Version: 1
Events:                            <none>

Use the s3cmd command to review the content of the S3 storage.
[student@workstation backup-oadp]$ s3cmd la -r
2023-09-07 17:08      2597   s3://backup-f5..c6/docker/registry/v2/.../data
...output omitted...
2023-09-07 17:12    168730   s3://backup-f5..c6/oadp/backups/...tar.gz
...output omitted...
2023-09-07 17:12       183   s3://backup-f5..c6/openshift-adp/backup-...
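To script this validation step, the phase check can be wrapped in a small helper. This is a sketch: backup_ok is a hypothetical function, not an OADP or velero command, and the phase string would come from a query such as `oc get backup backup-production -o jsonpath='{.status.phase}'`.

```shell
# Sketch: decide in a script whether a Velero backup finished successfully.
# backup_ok PHASE -- hypothetical helper; pass the .status.phase value.
backup_ok() {
  case "$1" in
    Completed)              return 0 ;;  # backup finished successfully
    Failed|PartiallyFailed) return 1 ;;  # inspect `oc describe backup` for details
    *)                      return 1 ;;  # still in progress, or unknown phase
  esac
}
```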
Clean up the resources.
Change to the home directory.
[student@workstation backup-oadp]$ cd
[student@workstation ~]$

Use the velero command to remove the backup and all associated resources.
[student@workstation ~]$ oc exec deployment/velero \
-c velero -it -- \
./velero delete backup backup-production
Are you sure you want to continue (Y/N)? y
Request to delete backup "backup-production" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
Wait a couple of minutes and verify that the backup object is deleted.
[student@workstation ~]$ oc get backup
No resources found in openshift-adp namespace.

Use the s3cmd command to clean up the object storage bucket.
Use the bucket name from previous steps.
[student@workstation ~]$ s3cmd rm -r --force \
s3://backup-7d9d5169-6c8d-4b40-bbba-931ddaeb6f4c/
...output omitted...