Export all application resources and data for a project, and import them to another project to create a functional copy of a live application.
Outcomes
Export an OpenShift application, and include the settings, container images, and data.
Restore an application to a different namespace.
As the student user on the workstation machine, use the lab command to prepare your environment for this exercise, and to ensure that all required resources are available.
[student@workstation ~]$ lab start backup-export
Instructions
The development team published a new version of the etherpad application in the OpenShift internal registry and wants to deploy it in production.
Before the deployment of the new version, your company requires you to create and restore a backup of the production application into a new stage project and to verify data integrity.
The application to back up is in the production project.
Connect to the production etherpad application and create a pad.
You use this pad later in this exercise to validate the application restoration.
Open a web browser and navigate to https://etherpad-production.apps.ocp4.example.com.
Create a pad named backup and click the button to open it.
Add a line to the pad with the current date and time, followed by Production backup started.
The application automatically saves changes to the pad. Close the browser tab.
Review the resources in the production project.
Open a terminal on the workstation and log in to the OpenShift cluster as the developer user with the developer password.
[student@workstation ~]$ oc login -u developer -p developer \
https://api.ocp4.example.com:6443
Login successful.
You have one project on this server: "production"
Using project "production".
List the resources in the production project with the oc get all command.
[student@workstation ~]$ oc get all
NAME READY STATUS RESTARTS AGE
pod/etherpad-6f9598bbb5-tmnt8 1/1 Running 0 109s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/etherpad ClusterIP 172.30.210.241 <none> 9001/TCP 111s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/etherpad 1/1 1 1 109s
NAME DESIRED CURRENT READY AGE
replicaset.apps/etherpad-6f9598bbb5 1 1 1 109s
NAME ... TAGS UPDATED
imagestream.image.openshift.io/etherpad ... 1.8.18,1.9.1 109s
NAME ... PORT TERMINATION WILDCARD
route.route.openshift.io/etherpad ... http edge/Redirect None
The oc get all command shows only a subset of all available resources in a namespace, and might omit some resources that the application requires.
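To see why this matters in practice, you can enumerate every namespaced resource type and list its objects in the project by combining oc api-resources with oc get. This is an optional sketch, not an exercise step; some resource types might return permission errors for the developer user.
[student@workstation ~]$ oc api-resources --verbs=list --namespaced -o name \
| xargs -n1 oc get -n production --ignore-not-found --show-kind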
The etherpad application deployment requires the following resource types:
PersistentVolumeClaim
Service
Route
Deployment
Use the oc get command to list these resource types in the production project, and compare the result with the previous step.
[student@workstation ~]$ oc get pvc,svc,route,deployment
NAME STATUS ... AGE
persistentvolumeclaim/etherpad Bound ... 43m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/etherpad ClusterIP 172.30.96.26 <none> 9001/TCP 43m
NAME ... PORT TERMINATION WILDCARD
route.route.openshift.io/etherpad ... http edge/Redirect None
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/etherpad 1/1 1 1 43m
Export the Kubernetes resources from the previous step and save them in the ~/DO380/labs/backup-export/production directory.
Change to the ~/DO380/labs/backup-export directory.
[student@workstation ~]$ cd ~/DO380/labs/backup-export
[student@workstation backup-export]$
Create a production directory.
[student@workstation backup-export]$ mkdir production
Export the persistent volume claim resource to a YAML file named 01-pvc.yml.
Clean up the YAML file by removing the following fields:
metadata.annotations
metadata.creationTimestamp
metadata.namespace
metadata.finalizers
metadata.resourceVersion
metadata.uid
spec.volumeName
status
[student@workstation backup-export]$ oc get pvc etherpad -o yaml \
| yq d - metadata.annotations \
| yq d - metadata.creationTimestamp \
| yq d - metadata.namespace \
| yq d - metadata.finalizers \
| yq d - metadata.resourceVersion \
| yq d - metadata.uid \
| yq d - spec.volumeName \
| yq d - status \
> production/01-pvc.yml
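The yq d pipelines in this exercise use yq version 3 syntax. If your workstation has yq version 4 instead, the same cleanup can be expressed with the del operator; the following command is only a sketch of that alternative, assuming yq version 4.
[student@workstation backup-export]$ oc get pvc etherpad -o yaml \
| yq 'del(.metadata.annotations) | del(.metadata.creationTimestamp)
  | del(.metadata.namespace) | del(.metadata.finalizers)
  | del(.metadata.resourceVersion) | del(.metadata.uid)
  | del(.spec.volumeName) | del(.status)' \
> production/01-pvc.yml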
Export the deployment resource to a YAML file named 02-deployment.yml.
Clean up the YAML file by removing the following fields:
metadata.annotations
metadata.creationTimestamp
metadata.namespace
metadata.resourceVersion
metadata.uid
metadata.generation
status
[student@workstation backup-export]$ oc get deploy etherpad -oyaml \
| yq d - metadata.annotations \
| yq d - metadata.creationTimestamp \
| yq d - metadata.namespace \
| yq d - metadata.resourceVersion \
| yq d - metadata.uid \
| yq d - metadata.generation \
| yq d - status \
> production/02-deployment.yml
Export the service resource to a YAML file named 03-service.yml.
Clean up the YAML file by removing the following fields:
metadata.annotations
metadata.creationTimestamp
metadata.namespace
metadata.resourceVersion
metadata.uid
spec.clusterIP
spec.clusterIPs
status
[student@workstation backup-export]$ oc get svc etherpad -oyaml \
| yq d - metadata.annotations \
| yq d - metadata.creationTimestamp \
| yq d - metadata.namespace \
| yq d - metadata.resourceVersion \
| yq d - metadata.uid \
| yq d - spec.clusterIP* \
| yq d - status \
> production/03-service.yml
Export the route resource to a YAML file named 04-route.yml.
Clean up the YAML file by removing the following fields:
metadata.annotations
metadata.creationTimestamp
metadata.namespace
metadata.resourceVersion
metadata.uid
spec.host
status
[student@workstation backup-export]$ oc get route etherpad -oyaml \
| yq d - metadata.annotations \
| yq d - metadata.creationTimestamp \
| yq d - metadata.namespace \
| yq d - metadata.resourceVersion \
| yq d - metadata.uid \
| yq d - spec.host \
| yq d - status \
> production/04-route.yml
Review the exported resource files.
[student@workstation backup-export]$ tree production
production/
├── 01-pvc.yml
├── 02-deployment.yml
├── 03-service.yml
└── 04-route.yml
0 directories, 4 files
As the admin user, expose the internal registry to enable users to export and import container images.
Log in to the OpenShift cluster as the admin user with the redhatocp password.
[student@workstation backup-export]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login successful.
...output omitted...
Expose the internal registry.
[student@workstation backup-export]$ oc patch \
configs.imageregistry.operator.openshift.io/cluster \
--patch '{"spec":{"defaultRoute":true}}' --type merge
config.imageregistry.operator.openshift.io/cluster patched
Wait until the openshift-apiserver operator is redeployed.
It can take a couple of minutes.
[student@workstation backup-export]$ watch -n10 oc get co openshift-apiserver
NAME VERSION AVAILABLE PROGRESSING ... MESSAGE
openshift-apiserver 4.12.10 True True ... APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation
openshift-apiserver 4.12.10 True True ... APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation
openshift-apiserver 4.12.10 True False ...
Press Ctrl+C to exit the watch command.
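Optionally, confirm that the registry route now exists. This is not an exercise step; it lists the routes in the openshift-image-registry namespace, and the HOST/PORT column should show the host name that the next steps use.
[student@workstation backup-export]$ oc get route -n openshift-image-registry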
As the developer user, export all container images that are referenced in the etherpad image stream and save them in the ~/DO380/labs/backup-export/production directory.
Log in to the OpenShift cluster as the developer user with the developer password.
[student@workstation backup-export]$ oc login -u developer -p developer \
https://api.ocp4.example.com:6443
Login successful.
You have one project on this server: "production"
Using project "production".
Log in to the internal registry by using the oc registry login command.
[student@workstation backup-export]$ oc registry login
info: Using registry public hostname default-route-openshift-image-registry.apps.ocp4.example.com
...output omitted...
Saved credentials for default-route-openshift-image-registry.apps.ocp4.example.com
Identify the fully qualified image name of the etherpad image and save it as the IMAGE environment variable.
[student@workstation backup-export]$ IMAGE=$(oc get is etherpad \
--template '{{.status.publicDockerImageRepository}}')
[student@workstation backup-export]$ echo $IMAGE
default-route-openshift-image-registry.apps.ocp4.example.com/production/etherpad
Export all container images from the etherpad image stream by using the oc image mirror command.
[student@workstation backup-export]$ oc image mirror \
$IMAGE file://etherpad --dir=production
...output omitted...
sha256:057c...2513 file://etherpad:1.9.1
sha256:7265...8ec1 file://etherpad:1.8.18
info: Mirroring completed in 2.96s (116.3MB/s)
The oc image mirror command stores the exported images in the v2 directory.
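As a quick optional check, you can list the mirrored manifests directly instead of walking the whole directory tree; the path assumes the file://etherpad target that the previous command used. The output lists the mirrored tags and their manifest digests.
[student@workstation backup-export]$ ls production/v2/etherpad/manifests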
Review the exported images in the production directory.
[student@workstation backup-export]$ tree production
production/
...output omitted...
└── v2
└── etherpad
├── blobs
│ ├── sha256:057c...a2513
...output omitted...
└── manifests
├── 1.8.18 -> sha256:7265...28ec1
├── 1.9.1 -> sha256:057c...2513
├── sha256:057c...2513
└── sha256:7265...28ec1
4 directories, 38 files
Create a snapshot of the persistent volume to limit the downtime of the application during the backup operation.
List the persistent volumes and identify the associated storage class.
[student@workstation backup-export]$ oc get pvc
NAME STATUS ... STORAGECLASS AGE
etherpad Bound ... ocs-external-storagecluster-ceph-rbd 25m
Identify the driver that is used for the ocs-external-storagecluster-ceph-rbd storage class.
[student@workstation backup-export]$ oc get storageclass \
ocs-external-storagecluster-ceph-rbd
NAME PROVISIONER ...
ocs-external-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com ...
Get the volume snapshot storage class name from the openshift-storage.rbd.csi.ceph.com driver.
[student@workstation backup-export]$ oc get volumesnapshotclasses \
| egrep "^NAME|openshift-storage.rbd.csi.ceph.com"
NAME ...
ocs-external-storagecluster-rbdplugin-snapclass ...
Scale down the application.
[student@workstation backup-export]$ oc scale deploy/etherpad --replicas 0
deployment.apps/etherpad scaled
Verify that no application pods are running.
[student@workstation backup-export]$ oc get pods
No resources found in production namespace.
Create a volume snapshot of the etherpad persistent volume by using the volume snapshot class from the previous steps.
You can use the template in the ~/DO380/labs/backup-export/volumesnapshot.yml path.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: etherpad
spec:
  volumeSnapshotClassName: ocs-external-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: etherpad
[student@workstation backup-export]$ oc apply -f volumesnapshot.yml
volumesnapshot.snapshot.storage.k8s.io/etherpad configured
You can use the solution in the ~/DO380/solutions/backup-export/volumesnapshot-etherpad.yml file.
Verify that the volume snapshot is created and ready to use.
[student@workstation backup-export]$ oc get volumesnapshot
NAME READYTOUSE SOURCEPVC ... CREATIONTIME AGE
etherpad true etherpad ... 12s 13s
Scale up the application.
[student@workstation backup-export]$ oc scale deploy/etherpad --replicas 1
deployment.apps/etherpad scaled
Export the application data from the volume snapshot and save it in the ~/DO380/labs/backup-export/production directory.
Create a persistent volume claim by using the data from the volume snapshot.
You can use the template in the ~/DO380/labs/backup-export/pvc-snapshot.yml path.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: etherpad-snapshot
  labels:
    app: etherpad-snapshot
spec:
  storageClassName: ocs-external-storagecluster-ceph-rbd
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: etherpad
  resources:
    requests:
      storage: 1Gi
[student@workstation backup-export]$ oc apply -f pvc-snapshot.yml
persistentvolumeclaim/etherpad-snapshot created
You can use the solution in the ~/DO380/solutions/backup-export/pvc-snapshot.yml file.
Verify that the persistent volume claim status is Bound.
[student@workstation backup-export]$ oc get pvc etherpad-snapshot
NAME STATUS ... STORAGECLASS AGE
etherpad-snapshot Bound ... ocs-external-storagecluster-ceph-rbd 117s
Mount the snapshot volume on /snapshot in a new pod named export-snapshot.
You can use the template in the ~/DO380/labs/backup-export/pod-snapshot.yml file.
apiVersion: v1
kind: Pod
metadata:
  name: export-snapshot
  labels:
    app: export-snapshot
spec:
  containers:
    ...output omitted...
    volumeMounts:
    - mountPath: /snapshot
      name: snapshot
  terminationGracePeriodSeconds: 2
  volumes:
  - name: snapshot
    persistentVolumeClaim:
      claimName: etherpad-snapshot
[student@workstation backup-export]$ oc apply -f pod-snapshot.yml
pod/export-snapshot created
You can use the solution in the ~/DO380/solutions/backup-export/pod-snapshot.yml file.
Verify that the export-snapshot pod is running.
[student@workstation backup-export]$ oc get pods export-snapshot
NAME READY STATUS RESTARTS AGE
export-snapshot 1/1 Running 0 31s
Copy the snapshot data from the pod to the workstation in the production/data directory.
[student@workstation backup-export]$ oc cp \
export-snapshot:/snapshot production/data
tar: Removing leading `/' from member names
Review the exported data.
[student@workstation backup-export]$ tree production
production/
...output omitted...
├── data
│ ├── dirty.db
│ ├── lost+found
│ ├── minified_1f08...0f7e
│ ├── minified_1f08...0f7e.gz
│ ├── minified_5a47...57ca
│ ├── minified_5a47...57ca.gz
│ ├── minified_c24d...5b91
│ └── minified_c24d...5b91.gz
...output omitted...
6 directories, 45 files
Delete the export-snapshot pod.
[student@workstation backup-export]$ oc delete pod/export-snapshot
pod "export-snapshot" deletedImport the application container to the stage project.
Create the stage project.
[student@workstation backup-export]$ oc new-project stage
Now using project "stage" on server "https://api.ocp4.example.com:6443".
...output omitted...
Import the etherpad container images to the stage project by using the oc image mirror command.
[student@workstation backup-export]$ oc image mirror \
--dir=production file://etherpad \
default-route-openshift-image-registry.apps.ocp4.example.com/stage/etherpad
...output omitted...
sha256:7265...8ec1 default-route.../stage/etherpad:1.8.18
sha256:057c...2513 default-route.../stage/etherpad:1.9.1
info: Mirroring completed in 4.98s (69.07MB/s)
Review the etherpad image stream.
[student@workstation backup-export]$ oc describe is/etherpad
...output omitted...
Image Repository: default-route.../stage/etherpad
Image Lookup: local=false
Unique Images: 2
Tags: 2

1.9.1
  no spec tag

  * image-registry...stage/etherpad@sha256:057c...2513
    24 seconds ago

1.8.18
  no spec tag

  * image-registry...stage/etherpad@sha256:7265...8ec1
    38 seconds ago
Import the production application data to the stage project.
Import the persistent volume claim definition.
[student@workstation backup-export]$ oc apply -n stage -f production/01-pvc.yml
persistentvolumeclaim/etherpad created
Verify that the persistent volume claim status is Bound.
[student@workstation backup-export]$ oc get pvc
NAME STATUS VOLUME ... AGE
etherpad Bound pvc-1d2139df-8969-4dc8-affd-c473e301480f ... 4s
Mount the volume on /data in a new pod named restore-snapshot.
You can use the template in the ~/DO380/labs/backup-export/pod-restore.yml file.
apiVersion: v1
kind: Pod
metadata:
  name: restore-snapshot
  labels:
    app: restore-snapshot
spec:
  containers:
    ...output omitted...
    volumeMounts:
    - mountPath: /data
      name: data
  terminationGracePeriodSeconds: 2
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: etherpad
[student@workstation backup-export]$ oc apply -f pod-restore.yml
pod/restore-snapshot created
You can use the solution in the ~/DO380/solutions/backup-export/pod-restore.yml file.
Verify that the restore-snapshot pod is running.
[student@workstation backup-export]$ oc get po restore-snapshot
NAME READY STATUS RESTARTS AGE
restore-snapshot 1/1 Running 0 65s
Copy the backup data from the workstation machine to the /data directory in the pod.
[student@workstation backup-export]$ oc rsync \
--no-perms \
./production/data/ \
restore-snapshot:/data/
sending incremental file list
dirty.db
minified_5dbe...03cb
minified_5dbe...03cb.gz
minified_8214...cacf
minified_8214...cacf.gz
minified_bd66...b58f
minified_bd66...b58f.gz
sent 776,429 bytes received 150 bytes 1,553,158.00 bytes/sec
total size is 775,441 speedup is 1.00
Review the imported data.
[student@workstation backup-export]$ oc exec \
restore-snapshot \
-- ls -1 /data
dirty.db
lost+found
minified_5dbe...03cb
minified_5dbe...03cb.gz
minified_8214...cacf
minified_8214...cacf.gz
minified_bd66...b58f
minified_bd66...b58f.gz
Delete the restore-snapshot pod.
[student@workstation backup-export]$ oc delete pod/restore-snapshot
pod "restore-snapshot" deletedImport the application Kubernetes resources to the stage project.
Create a copy of the deployment YAML file and name it stage-deployment.yml.
[student@workstation backup-export]$ cp production/02-deployment.yml \
stage-deployment.yml
Modify the stage-deployment.yml file with the following parameters:
| Parameter | Value |
|---|---|
| Container image | image-registry.openshift-image-registry.svc:5000/stage/etherpad:1.8.18 |
| TITLE environment variable | DO380 - stage etherpad |
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: etherpad
  name: etherpad
spec:
...output omitted...
    spec:
      containers:
      - env:
        - name: TITLE
          value: DO380 - stage etherpad
        - name: DEFAULT_PAD_TEXT
          value: Add some content and write your ideas
        - name: SUPPRESS_ERRORS_IN_PAD_TEXT
          value: "true"
        - name: EXPOSE_VERSION
          value: "true"
        image: image-registry.openshift-image-registry.svc:5000/stage/etherpad:1.8.18
...output omitted...
      initContainers:
      - args:
        ...output omitted...
        image: image-registry.openshift-image-registry.svc:5000/stage/etherpad:1.8.18
        imagePullPolicy: IfNotPresent
        name: clean
...output omitted...
The TITLE environment variable defines the instance name for this application.
It also appears as the browser window title when you access the application.
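You can make these edits in a text editor, or script them. The following commands are only a sketch: they assume the same yq version 3 syntax that this exercise uses for exports, and that TITLE is the first entry in the container env list, as shown in the excerpt above.
[student@workstation backup-export]$ yq w -i stage-deployment.yml \
'spec.template.spec.containers[0].image' \
image-registry.openshift-image-registry.svc:5000/stage/etherpad:1.8.18
[student@workstation backup-export]$ yq w -i stage-deployment.yml \
'spec.template.spec.initContainers[0].image' \
image-registry.openshift-image-registry.svc:5000/stage/etherpad:1.8.18
[student@workstation backup-export]$ yq w -i stage-deployment.yml \
'spec.template.spec.containers[0].env[0].value' 'DO380 - stage etherpad'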
Import the deployment definition to the stage project.
[student@workstation backup-export]$ oc apply -n stage -f stage-deployment.yml
deployment.apps/etherpad created
Import the service definition to the stage project.
[student@workstation backup-export]$ oc apply -n stage \
-f production/03-service.yml
service/etherpad created
Import the route definition to the stage project.
[student@workstation backup-export]$ oc apply -n stage \
-f production/04-route.yml
route.route.openshift.io/etherpad created
Review all imported resources.
[student@workstation backup-export]$ oc get pvc,is,svc,route,deployment
NAME STATUS ...
persistentvolumeclaim/etherpad Bound ...
NAME ... TAGS UPDATED
imagestream.image.openshift.io/etherpad ... 1.8.18,1.9.1 4 minutes ago
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/etherpad ClusterIP 172.30.222.62 <none> 9001/TCP 29s
NAME HOST/PORT ...
route.route.openshift.io/etherpad etherpad-stage.apps.ocp4.example.com ...
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/etherpad 1/1 1 1 103s
Connect to the stage etherpad application and open the backup pad.
Open a web browser and navigate to https://etherpad-stage.apps.ocp4.example.com. Confirm the instance name in the browser window.
Open and review the pad named backup.
If the restoration succeeded, then the pad shows the content that you added at the beginning of this exercise.
Close the browser tab.
Clean up the resources.
Change to the home directory.
[student@workstation backup-export]$ cd
[student@workstation ~]$
Log in to the OpenShift cluster as the admin user with the redhatocp password.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login successful.
...output omitted...
Remove the exposed route from the OpenShift internal registry.
[student@workstation ~]$ oc patch \
configs.imageregistry.operator.openshift.io/cluster \
--patch '{"spec":{"defaultRoute":false}}' --type merge
config.imageregistry.operator.openshift.io/cluster patched
The cluster image registry patch triggers a kube-apiserver deployment rollout.
The lab finish command might take up to 10 minutes while the cluster stabilizes.
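If you want to follow the rollout before you run the lab finish command, you can monitor the kube-apiserver cluster operator in the same way that you watched the openshift-apiserver operator earlier in this exercise. This check is optional; press Ctrl+C to exit.
[student@workstation ~]$ watch -n10 oc get co kube-apiserver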
Delete the stage project.
[student@workstation ~]$ oc delete project stage
project.project.openshift.io "stage" deleted