
Chapter 2.  Backup, Restore, and Migration of Applications with OADP

Abstract

Goal

Back up and restore application settings and data with OpenShift API for Data Protection (OADP).

Sections
  • Export and Import Application Data and Settings (and Guided Exercise)

  • OADP Operator Deployment and Features (and Guided Exercise)

  • Backup and Restore with OADP (and Guided Exercise)

Lab
  • Backup, Restore, and Migration of Applications with OADP

Export and Import Application Data and Settings

Objectives

  • Export and import application data and settings between projects.

Export and Import a Kubernetes Application

The ability to export and import a Kubernetes application is useful in many scenarios. The following list describes some of these scenarios:

  • Partial or full restoration to a previous working state

  • Disaster recovery and business continuity

  • Migration from one cluster to another

  • Duplication on multiple environments

Backup and restore procedures are critical for an organization to recover from data loss or corruption. The most common data loss scenarios are hardware failures, cyberattacks, software bugs, or human errors.

A Kubernetes application backup includes all the needed resources to restore that application to a previous working state. This backup must include the following artifacts:

  • Kubernetes resources that define the application and its settings.

  • Container images in the internal registry that the application's containers use.

  • Data that is stored in persistent volumes or object storage for a stateful application.

Note

Application data or configuration that is hosted on external services such as Database-as-a-Service or object storage might be required but is outside the scope of this chapter.

Red Hat OpenShift and Red Hat partners provide data protection solutions for a faster recovery plan. The following list includes examples of data protection solutions:

  • OpenShift API for Data Protection

  • Veeam Kasten K10

  • Storware Backup and Recovery

  • IBM Spectrum Protect Plus

  • Pure Storage Portworx Backup

Backing Up Application Resources

A Kubernetes application consists of many resources. Exporting and importing an application includes the following steps:

  • List all required resources for the application.

  • Export the listed resources.

  • Clean the exported resource files.

  • Deploy the cleaned resource files.

The first step to export an application is to list all the application resources.

Note

The oc get all command lists only a subset of resources in a project and does not show resources such as secrets, configuration maps, and so on.

You do not need to list all resources in a project. For example, a MySQL application might require the following resources:

  • Deployment

  • Service

  • Secret

You can list these resources by using the oc get command.

[user@host ~]$ oc get deployment,svc,secret
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql       1/1     1            1           112s

NAME            TYPE          CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP     172.30.210.241    <none>        9001/TCP   115s

NAME                            TYPE                                  DATA   AGE
secret/builder-dockercfg-skqdg  kubernetes.io/dockercfg               1      2m16s
secret/builder-token-cpmxf      kubernetes.io/service-account-token   4      2m16s
...output omitted...
secret/mysql-credentials        Opaque                                3      60s

A MySQL application might also use a persistent volume claim for its data. The resources that you must export depend on the application.

Export Application Resources

You can export an application object to a YAML file by using the oc get command.

[user@host ~]$ oc -n prod get deployment/mysql -o yaml > backup_deployment.yaml

The backup_deployment.yaml file has all deployment details such as specifications, metadata, and status.

[user@host ~]$ cat backup_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  labels:
    app: mysql
  name: mysql
  namespace: prod
...output omitted...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchLabels:
      app: mysql
      ...output omitted...
status:
  availableReplicas: 1
...output omitted...

You can export all other application resources by using the same oc get command.
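
For example, the following commands export the service and the secret from the earlier listing to YAML files. The file names are only a suggestion:

[user@host ~]$ oc -n prod get service/mysql -o yaml > service-mysql.yaml
[user@host ~]$ oc -n prod get secret/mysql-credentials -o yaml > secret-mysql.yaml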

Import Application Resources

You can create the application resources in a new project by using the backup YAML files. You can remove the metadata.namespace field and use the oc create -f command to create each resource in the new project.

Besides the metadata.namespace field, all backup resource files contain runtime and status information, such as the metadata.annotations, metadata.creationTimestamp, metadata.resourceVersion, metadata.generation, and status fields. Remove these fields and keep the cleaned backup resource files to create the resources in a new project.
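
As a minimal sketch, you can chain the yq tool, which is presented later in this section, to remove these fields from the deployment backup. The field list follows the previous paragraph and is not exhaustive for every resource type:

[user@host ~]$ cat backup_deployment.yaml \
  | yq d - metadata.namespace \
  | yq d - metadata.annotations \
  | yq d - metadata.creationTimestamp \
  | yq d - metadata.resourceVersion \
  | yq d - metadata.generation \
  | yq d - status  > clean_backup_deployment.yaml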

The following example demonstrates importing some required resources for an application:

[user@host ~]$ oc create -f clean_backup_deployment.yaml -n prod-backup
deployment.apps/mysql created

[user@host ~]$ oc create -f service-mysql.yaml -n prod-backup
The Service "mysql" is invalid: spec.clusterIPs: Invalid value: []string{"172.30.210.241"}: failed to allocate IP 172.30.210.241: provided IP is already allocated

The mysql deployment was created successfully in the prod-backup project by using the backup YAML file.

However, the metadata and status removal does not work for all resources. Some resources require additional modifications.

The service resource creation failed because the IP address is already allocated to the mysql service in the prod project. The service backup YAML file requires more modification for a successful creation.

The modification depends on the resource type. Service resource creation requires the removal of not only the metadata and status fields but also the spec.clusterIP field from the YAML file.

You can use a text editor or other tools to remove these fields. For example, you can use the yq tool to process the YAML file and remove a specific field.

[user@host ~]$ cat service-mysql.yaml \
  | yq d - metadata.namespace \
  | yq d - spec.clusterIP*  > clean-service-mysql.yaml

[user@host ~]$ oc create -f clean-service-mysql.yaml -n prod-backup
service/mysql created

Note

The yq tool is a command-line YAML processor that is similar to the jq tool for JSON files. The d option removes the specified field from the output.

See the references section for more information about the yq tool.

Similar to the service resource, the route resource creation requires the removal of the metadata.namespace and spec.host fields from the YAML file.

[user@host ~]$ cat route-frontend.yaml \
  | yq d - metadata.namespace \
  | yq d - spec.host  > clean-route-frontend.yaml

[user@host ~]$ oc create -f clean-route-frontend.yaml -n prod-backup
route.route.openshift.io/etherpad created

You do not need to create all the exported resources. OpenShift automatically creates some resources in the new project, such as service accounts, secrets, and role bindings. Other resources are created by the resources that you restore: the deployment resource creates replicasets resources, and the deploymentconfig resource creates replicationcontrollers resources.

Backing Up Container Images

You can export container images by using container tools, such as podman or skopeo, to copy the images from one registry to another. For more details about Podman and Skopeo, refer to the DO188: Red Hat OpenShift Development I: Introduction to Containers with Podman training course.

Because the OpenShift internal registry is accessible only from within the cluster by default, you can use a Kubernetes job to export the images to a remote location. To access the registry from outside the cluster or to export images from your local machine, an OpenShift administrator must expose the registry externally.

Expose OpenShift Internal Registry

You can configure the OpenShift internal registry operator to expose the registry externally with the following command:

[user@host ~]$ oc patch \
  configs.imageregistry.operator.openshift.io/cluster \
  --patch '{"spec":{"defaultRoute":true}}' \
  --type merge

Note

This action requires the cluster-admin role.

The modification to the image registry operator triggers a redeployment of the OpenShift API server. It can take up to 10 minutes for the cluster to stabilize.
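
To follow the rollout, one option is to check the status of the related cluster Operators, for example:

[user@host ~]$ oc get clusteroperators image-registry openshift-apiserver
...output omitted...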

The operator creates a route named default-route to expose the registry externally. The exposed registry uses the following URL format:

default-route-openshift-image-registry.<apps-domain>

Note

You can use a custom hostname for the registry if needed. See the references section for more information about exposing the internal registry.

You can use the following command to retrieve the registry URL from the created route and save it in an environment variable for later use:

[user@host ~]$ REGISTRY=$(oc get \
  route default-route \
  -n openshift-image-registry \
  --template '{{.spec.host}}')

OpenShift users who do not have access to the openshift-image-registry namespace can retrieve the registry URL from any image stream:

[user@host ~]$ oc -n openshift get is/cli \
  -ojsonpath="{.status.publicDockerImageRepository}{'\n'}"

default-route-openshift-image-registry.apps.ocp4.example.com/openshift/cli

You can then log in to the internal registry by using your OpenShift username and authentication token with the following command:

[user@host ~]$ podman login \
  -u $(oc whoami) \
  -p $(oc whoami -t) \
  --tls-verify=false \
  $REGISTRY

Note

If the cluster's default ingress certificate is not trusted, then you must use the --tls-verify=false option to skip the certificate verification.

To configure a trusted certificate with the Ingress Operator, refer to the Setting a custom default certificate section in the Ingress Operator in OpenShift Container Platform chapter in the Red Hat OpenShift Container Platform 4.14 Networking documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#nw-ingress-setting-a-custom-default-certificate_configuring-ingress

Alternatively, you can use the oc registry login command to log in to the internal registry without the need to specify your credentials and the registry URL. The oc registry login command automatically uses your authentication token and the internal registry URL from the OpenShift cluster.

The oc command stores the credentials in the ${HOME}/.docker/config.json file in Base64 format. The Podman, Skopeo, and Docker clients can use the authentication details from that file to access an image registry.

[user@host ~]$ oc registry login
info: Using registry public hostname default-route-openshift-image-registry.apps.ocp4.example.com
Warning: the default reading order of registry auth file will be changed from "${HOME}/.docker/config.json" to podman registry config locations in the future version of oc. "${HOME}/.docker/config.json" is deprecated, but can still be used for storing credentials as a fallback. See https://github.com/containers/image/blob/main/docs/containers-auth.json.5.md for the order of podman registry config locations.
Saved credentials for default-route-openshift-image-registry.apps.ocp4.example.com

Note

The use of ${HOME}/.docker/config.json file by the oc command is deprecated and will be changed to the ${XDG_RUNTIME_DIR}/containers/auth.json file to store credentials in a future version. Although Podman and Skopeo can use the credentials from both files, the Docker client uses only the first file.

You can safely ignore the warning from the oc registry login command that mentions this deprecation, because the rest of the chapter uses only Podman and Skopeo.

Export Container Images

To export the image from outside the cluster, expose the internal registry and use the skopeo command to copy the image to a remote registry:

[user@host ~]$ skopeo copy \
  docker://${REGISTRY}/project_name/imagestream:tag \ 1
  docker://remote-registry.example.com/path/to/image:remotetag 2

1

Fully qualified source image in the OpenShift internal registry

2

Destination registry URL with image and tag information

Skopeo can copy images to other locations such as a local directory or a .tar archive. See the references section for detailed use of the skopeo copy command.
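
For example, the following sketch copies the image to a local directory by using the dir transport, or to a .tar archive by using the docker-archive transport. The destination paths are arbitrary:

[user@host ~]$ skopeo copy \
  docker://${REGISTRY}/project_name/imagestream:tag \
  dir:/tmp/imagestream-backup

[user@host ~]$ skopeo copy \
  docker://${REGISTRY}/project_name/imagestream:tag \
  docker-archive:/tmp/imagestream-backup.tar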

Note

You can use the skopeo sync command to copy all the available tags in an image. See the references section for more information about the skopeo sync command.

If Skopeo is not available on your system, then you can use Podman or Docker to pull the image in the local container registry. You can then export the image as a .tar file with the podman save command or push the image to a remote registry.

[user@host ~]$ podman pull \
  ${REGISTRY}/project_name/imagestream:tag
[user@host ~]$ podman save \
  ${REGISTRY}/project_name/imagestream:tag \
  | bzip2 > image_backup.tar.bz2
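
Alternatively, a short sketch of pushing the pulled image to a remote registry instead of saving it to an archive. The destination registry name is only an example:

[user@host ~]$ podman tag \
  ${REGISTRY}/project_name/imagestream:tag \
  remote-registry.example.com/path/to/image:remotetag
[user@host ~]$ podman push \
  remote-registry.example.com/path/to/image:remotetag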

Alternatively, you can use the oc image mirror command to copy images to a local or remote location, similar to the skopeo copy command:

[user@host ~]$ oc image mirror \
  ${REGISTRY}/project_name/imagestream:* \ 1
  remote-registry.example.com/path/to/image

1

You can use the wildcard character (*) to copy all the tags to the destination registry.

Note

The oc client that matches your OpenShift version is available in the openshift/cli:latest container image that is included in the OpenShift internal registry.

See the references section for additional examples of the oc image mirror command.

To export a container image from within the cluster, you can use any available container tools in a pod to copy the image to the location of your choice, such as a persistent volume on NFS storage, S3 storage, or a remote registry.

You can use the following YAML file as an example to create a Kubernetes job that exports the container image to a persistent volume by using the OpenShift client:

apiVersion: batch/v1
kind: Job
metadata:
  name: backup-image
  namespace: application
  labels:
    app: backup
spec:
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: backup
    spec:
      containers:
      - name: backup
        image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
        env:
        - name: REGISTRY_AUTH_FILE
          value: /tmp/dockercfg.json
        command: ["/bin/bash", "-c"]
        args:
        - | 1
          oc registry login
          oc image mirror \
            image-registry.openshift-image-registry.svc:5000/application/myapp:* \
            file://myapp --dir /backup
        volumeMounts: 2
          - mountPath: /backup
            name: backup
...output omitted...
      volumes: 3
      - name: backup
        persistentVolumeClaim:
          claimName: backup-volume

1

Log in to the internal registry and export all tags of the myapp image stream to the /backup path.

2

Volume mount definition for the backup location.

3

Volume definition for the backup persistent volume claims (PVC).

Note

You need the system:image-puller role on the OpenShift project to pull images from any image streams in that project. Project users and administrators already have this permission, as well as the default service account.

Import Container Images

You can import an image to an image stream in any of your projects by using the same container tools as for the export. Similar to the export, you can use the following skopeo command to import an image from a remote repository:

[user@host ~]$ skopeo copy \
  docker://remote-registry.example.com/myimage:latest \
  docker://${REGISTRY}/project_name/mynewimagestream:latest

Note

If the image stream does not exist, then OpenShift creates it when the image is pushed.

You can also use the oc image mirror command to copy images in a similar way to the skopeo copy command:

[user@host ~]$ oc image mirror \
  remote-registry.example.com/myimage:latest \
  ${REGISTRY}/project_name/mynewimagestream:latest

You can use Podman or Docker to import an image, although the process requires more steps, because you must first load the image into the local container storage and then rename the image to use the new registry name.

The following commands load an image into the local Podman storage from a compressed archive and push it to the OpenShift internal registry:

[user@host ~]$ podman load \
  -i image_backup.tar.bz2
...output omitted...
Storing signatures
Loaded image(s): registry.apps.ocp4.example.com/application/myapp:1.2.3

If needed, rename the image with the podman tag command to match the new OpenShift project and image stream:

[user@host ~]$ podman tag \
  registry.apps.ocp4.example.com/application/myapp:1.2.3 \
  ${REGISTRY}/newproject/myapp:1.2.3

Then, send the image to the OpenShift internal registry:

[user@host ~]$ podman push \
  ${REGISTRY}/newproject/myapp:1.2.3
...output omitted...
Writing manifest to image destination
Storing signatures

Note

You need the system:image-pusher role on the OpenShift project to push images to any image streams in that project. Project users and administrators already have this permission, as well as the builder service account.

Backing Up Application Data

Several methods of data backup exist, depending on the application. Some applications provide dedicated tools or procedures to achieve the most reliable data protection and consistency. Different consistency levels can be achieved depending on the backup method.

Inconsistent backup

A backup is called inconsistent when the application alters data during the backup process. Traditional data copying when the application is running creates inconsistent backups.

Crash-consistent backup

A crash-consistent backup is created by suspending disk I/O during the backup, either with snapshot technology or with specialized tools, to ensure data consistency on disk. Application data in memory and pending I/O operations are not captured. The state of the application is preserved as if the application had crashed or was suddenly shut down due to power loss.

Application-consistent backup

Application-consistent backup is the most reliable type of backup because it ensures that all in-memory data and pending I/O operations are written to disk before creating the backup.

Some applications provide a set of tools to flush memory data to disk and to pause file system operations on demand. Combined with snapshots, these tools provide an application-consistent backup without any downtime. This method is also known as a hot backup.

For example, a MySQL database provides the FLUSH TABLES WITH READ LOCK statement to flush all operations to disk and to lock all tables before taking a snapshot. You can then unlock tables with the UNLOCK TABLES statement after the snapshot is created.
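
As a quick sketch, in an interactive MySQL session, keep the session open while the snapshot is taken, because closing the session releases the lock:

mysql> FLUSH TABLES WITH READ LOCK;
...output omitted...
mysql> UNLOCK TABLES;
...output omitted...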

Database applications often come with specialized tools to create and restore backups without stopping the application or using volume snapshots. The following list includes examples of specialized tools:

  • mysqldump for MySQL and MariaDB

  • pg_dump for PostgreSQL

Note

Creating backups with specialized tools is out of the scope of this course.

A more universal way to create an application-consistent backup is to stop the application, copy the data to another location, and then restart the application. This method is also called a cold backup, because the application is down during the operation.

Depending on the amount of data, the application can be unavailable for a long time during the backup operation. By using a volume snapshot, you can reduce this downtime to only a few minutes, and then perform the backup from the cloned volume while the application is back online.

Volume Snapshot

Volume snapshot capability is available only with Container Storage Interface (CSI) drivers. However, snapshot functions are not implemented for all CSI drivers. The following list includes examples of CSI drivers with the snapshot capability:

  • AWS Elastic Block Storage ebs.csi.aws.com

  • Azure Disk disk.csi.azure.com

  • CephFS cephfs.csi.ceph.com

  • Ceph RBD rbd.csi.ceph.com

  • NetApp csi.trident.netapp.io

Kubernetes provides API resources, similar to the PersistentVolume and PersistentVolumeClaim resources, to create and manage volume snapshots.

VolumeSnapshotClass

Similar to the storage class for a persistent volume claim, a volume snapshot class describes the CSI driver and associated settings to create a volume snapshot.

Note

The VolumeSnapshotClass driver must match the StorageClass provisioner of the source PVC.

The following commands list all available storage and volume snapshot classes:

[user@host ~]$ oc get volumesnapshotclasses
NAME                                        DRIVER
ocs-storagecluster-cephfsplugin-snapclass   openshift-storage.cephfs.csi.ceph.com
ocs-storagecluster-rbdplugin-snapclass      openshift-storage.rbd.csi.ceph.com
[user@host ~]$ oc get storageclasses | egrep "^NAME|csi"
NAME                                   PROVISIONER
ocs-external-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com
ocs-external-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com

VolumeSnapshot

Similar to the PersistentVolumeClaim resource, a VolumeSnapshot resource requests the creation of a snapshot.

The following example is a YAML file for creating a volume snapshot:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot 1
  namespace: application 2
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass 3
  source:
    persistentVolumeClaimName: application-data 4

1

Name of the volume snapshot.

2

Namespace of the volume snapshot. It must be the same as the source PVC.

3

Snapshot class name for the volume snapshot.

4

Name of the source PVC that is used for the snapshot.
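
Assuming that the definition is saved in a file named my-snapshot.yaml (a hypothetical file name), you can create the snapshot with the oc apply command:

[user@host ~]$ oc apply -f my-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/my-snapshot created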

The following command lists volume snapshots. A snapshot is successfully created when the READYTOUSE attribute is set to true and a VolumeSnapshotContent resource is created:

[user@host ~]$ oc get volumesnapshot
NAME         READYTOUSE   SOURCEPVC          ...  SNAPSHOTCONTENT
my-snapshot  true         application-data   ...  snapcontent-798...cf6

Note

When creating an application-consistent backup, the application must be quiesced or scaled down before the snapshot creation. The application can safely be resumed or scaled up after the snapshot is created and ready to use.
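
For example, a cold (scaled-down) snapshot might look like the following sketch. The myapp deployment name is hypothetical, and the oc wait command is only one way to confirm that the snapshot is ready:

[user@host ~]$ oc scale deployment/myapp -n application --replicas 0
[user@host ~]$ oc apply -f my-snapshot.yaml
[user@host ~]$ oc wait volumesnapshot/my-snapshot -n application \
  --for=jsonpath='{.status.readyToUse}'=true
[user@host ~]$ oc scale deployment/myapp -n application --replicas 1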

VolumeSnapshotContent

Similar to the PersistentVolume resource, a VolumeSnapshotContent resource represents a snapshot that a VolumeSnapshot resource created.

After the snapshot is created, you can use it as the data source for a new PVC, which a pod can then mount. The following example is a YAML file for creating a persistent volume claim from a snapshot:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-snapshot-volume 1
  namespace: application 2
spec:
  storageClassName: ocs-external-storagecluster-ceph-rbd 3
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot 4
  resources:
    requests:
      storage: 1Gi 5

1

Name of the PVC.

2

Namespace of the PVC. It must be the same as the snapshot namespace.

3

Storage class name for the PVC.

4

Name of the snapshot.

5

Size of the new volume. Must be equal to or greater than the snapshot size.

Export Application Data

If your application provides specialized backup tools, you can use them to export the data to your chosen location. The following example is a cron job definition to back up a MariaDB database and to store the backup file on AWS S3 storage:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-db
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
            - envFrom:
              - secretRef:
                  name: mariadb
              image: quay.io/redhattraining/mariadb:10.5
              command: ["/bin/bash", "-c"]
              args: 1
                - >
                  mariadb-dump -u "${MARIADB_USER}" -p"${MARIADB_PASSWORD}"
                  -h mariadb "${MARIADB_DATABASE}"
                  | bzip2 > /backup/backup-$(date '+%Y%m%d-%H%M').sql.bz2;
                  ls -al /backup
              name: backup
              volumeMounts:
                - mountPath: /backup
                  name: backup
          containers:
            - image: docker.io/amazon/aws-cli:latest
              command: ["/bin/bash", "-c"]
              args: 2
                - >
                  aws s3 cp --no-progress /backup/backup* s3://backup/
              name: s3cli
              volumeMounts:
                - mountPath: /backup
                  name: backup
                - mountPath: /root/.aws
                  name: aws-creds
          volumes:
            - name: backup
              emptyDir: {}
            - name: aws-creds
              secret:
                secretName: s3config

1

Use the mariadb-dump tool to export the database to a compressed file in an ephemeral volume.

2

Use the AWS CLI tool to send the backup file to AWS S3-compatible storage.

If the application does not provide backup tools, then you can use a volume snapshot to back up the application data. Depending on the storage provider that is available in the OpenShift cluster, you might prefer to export the snapshot content to an external storage location.

The following example is a job definition that uses an existing snapshot volume to archive the snapshot content, and exports the backup file to a remote S3 bucket:

apiVersion: batch/v1
kind: Job
metadata:
  name: backup
  namespace: application
  labels:
    app: backup
spec:
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: backup
    spec:
      containers:
      - name: backup
        image: docker.io/d3fk/s3cmd:latest
        command: ["/bin/sh", "-c"]
        args:
        - | 1
          tar czf /tmp/mybackup.tar.gz -C /snapshot .
          s3cmd put /tmp/mybackup.tar.gz s3://backup/
        volumeMounts: 2
          - mountPath: /snapshot
            name: snapshot
...output omitted...
      volumes: 3
      - name: snapshot
        persistentVolumeClaim:
          claimName: my-snapshot-volume

1

Archive the snapshot content and copy the archive to a remote S3 bucket.

2

Volume mount definition for the snapshot data in the container.

3

Volume definition for the snapshot PVC.

Important

If the volume storage class does not support volume snapshot, then you can mount the application volume instead to export the data. In this case, you must ensure that no other pods are using the volume during the backup, to avoid data inconsistencies.

Another option is to export the snapshot content locally from a pod by using the oc cp command. The following example is a pod definition that mounts the snapshot data, which you can use to export the data to your workstation:

apiVersion: v1
kind: Pod
metadata:
  name: export
spec:
  containers:
  - image: registry.access.redhat.com/ubi9:latest
    command: ["/bin/bash", "-c"]
    args:
    - sleep infinity 1
...output omitted...
    volumeMounts: 2
      - mountPath: /snapshot
        name: snapshot
  volumes: 3
  - name: snapshot
    persistentVolumeClaim:
      claimName: my-snapshot-volume

1

The sleep infinity command ensures that the pod stays alive during the manual export.

2

Volume mount definition for the snapshot data in the container.

3

Volume definition for the snapshot PVC.

You can use the following oc command to copy the snapshot data from the export pod to your local machine:

[user@host ~]$ oc cp export:/snapshot /tmp/backup

Note

The tar binary must be installed in the remote container for the oc cp command to work.

After the snapshot content is exported to a remote location, you can safely remove the snapshot PVC and the volume snapshot, to free up space on the storage back end.
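
For example, with the resource names from the previous examples (adjust the project as needed):

[user@host ~]$ oc delete pod/export
[user@host ~]$ oc delete pvc/my-snapshot-volume -n application
[user@host ~]$ oc delete volumesnapshot/my-snapshot -n application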

Important

If your application uses more than one persistent volume, then you must track the backup names and the volumes that they belong to. You need that information to restore each backup to the correct persistent volume.

Import Application Data

To import the data to another cluster or namespace, create a pod or a job where the application volume is mounted, and copy the exported data to the pod.

The following example is a YAML file for creating a job that fetches a remote backup from an S3 bucket and extracts the archive to the application volume:

apiVersion: batch/v1
kind: Job
metadata:
  name: restore
  namespace: application
  labels:
    app: restore
spec:
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: restore
    spec:
      containers:
      - name: restore
        image: docker.io/d3fk/s3cmd:latest
        command: ["/bin/bash", "-c"]
        args: 1
        - |
          s3cmd get s3://backup/mybackup.tar.gz /tmp/
          tar xvzf /tmp/mybackup.tar.gz -C /data
        volumeMounts: 2
          - mountPath: /data
            name: application-data
...output omitted...
      volumes: 3
      - name: application-data
        persistentVolumeClaim:
          claimName: application-data

1

Fetch the backup from the S3 bucket and extract the archive inside the application data volume.

2

Volume mount definition for the application data in the container.

3

Volume definition for the application PVC.

You can restore the data to an existing volume, or create a new one. If you use a new volume, then you must update the application to use that new volume.
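
For example, a minimal sketch that switches a deployment to a restored volume by using the oc set volumes command. The deployment name (myapp), volume name (data), and restored claim name (application-data-restored) are hypothetical:

[user@host ~]$ oc set volumes deployment/myapp -n application \
  --add --overwrite --name data \
  --type persistentVolumeClaim \
  --claim-name application-data-restored

With --overwrite, the existing volume source named data is replaced, so the mount path inside the pod stays the same.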

Important

To avoid data corruption, ensure that the application that uses the volume is not running during the restoration procedure.

References

skopeo-copy man page

skopeo-sync man page

yq documentation

oc image mirror Usage Examples

For accessing the internal registry, refer to the Accessing the Registry chapter in the Red Hat OpenShift Container Platform 4.14 Registry documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/registry/index#accessing-the-registry

For exposing the internal registry, refer to the Exposing the Registry chapter in the Red Hat OpenShift Container Platform 4.14 Registry documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/registry/index#securing-exposing-registry

For more information about Kubernetes Volume Snapshots, refer to the Volume Snapshots section in the Kubernetes documentation at https://kubernetes.io/docs/concepts/storage/volume-snapshots

Revision: do380-4.14-397a507