Implementing Storage in OpenShift Components

Objectives

After completing this section, you should be able to describe how OpenShift implements Ceph storage for each storage-related OpenShift feature.

Implementing Storage in Red Hat OpenShift Container Platform

Red Hat OpenShift Data Foundation provides the storage infrastructure for Red Hat OpenShift Container Platform. To provide persistent storage resources to developers, OpenShift Container Platform uses Kubernetes object models.

Administrators can use a StorageClass resource to describe the storage types and characteristics of the cluster. Administrators can use classes to define storage needs such as QoS levels or provisioner types.
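
For illustration, the following is a minimal StorageClass sketch. The metadata name is hypothetical, and the parameters are reduced to the essentials; a real OpenShift Data Foundation class carries additional provisioner-specific parameters.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-rbd                        # hypothetical name
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete                      # Delete or Retain
parameters:
  clusterID: openshift-storage             # assumed value for this deployment
  pool: ocs-storagecluster-cephblockpool   # RBD pool that backs the volumes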

A PersistentVolume (PV) or volume resource type is a storage element in an OpenShift Container Platform cluster. PersistentVolume resources specify the type of disk, level of performance, and storage implementation type. A cluster administrator can manually create these objects, or a StorageClass resource can provide them dynamically. Resources, such as pods, can use PersistentVolume resources while maintaining lifecycle independence.
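
As an illustration, a cluster administrator could create a PersistentVolume manually with a definition such as the following hedged sketch; the name, backend type, and server details are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                            # example backend; any supported plug-in works
    server: nfs.example.com       # assumed server
    path: /exports/data           # assumed export path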

A PersistentVolumeClaim (PVC) or claim is a storage request that a cluster user makes from inside a project. PersistentVolumeClaim resources contain the requested storage size and the required access mode.

Note

The StorageClass and PersistentVolume resources are cluster resources that are independent of any projects.

The following operations are the most common interactions between PersistentVolume and PersistentVolumeClaim resources.

  • Provisioning storage. In advance, administrators can create PersistentVolume resources of different types and sizes for future storage requests. By using a StorageClass resource, you can create PersistentVolume resources dynamically. PVs have a reclaim policy, which is specified in the reclaimPolicy field of the class with a value of Delete or Retain. The default is Delete.

    When you install OpenShift Data Foundation, the following storage classes are created:

    • ocs-storagecluster-ceph-rbd

    • ocs-storagecluster-cephfs

    • ocs-storagecluster-ceph-rgw

    • openshift-storage.noobaa.io

Note

Red Hat recommends changing the default StorageClass to ocs-storagecluster-ceph-rbd, which is backed by OpenShift Data Foundation.

  • Binding to a PersistentVolumeClaim. The PVC request specifies the storage amount, access mode, and an optional storage class. If the attributes of an existing unbound PV match the PVC, then the PV binds to the PVC. If no existing PV matches the request and the storage class supports dynamic provisioning, then a new PV is created. PVCs can remain unbound indefinitely if a matching PV does not exist or cannot be created. Claims are bound as matching volumes become available.

  • Using volumes. A pod sees a PersistentVolume resource as a volume plug-in. When scheduling a pod, define the PersistentVolumeClaim in the volumes block, as shown in the first example after this list. The cluster then looks for the PersistentVolume that is bound to that claim and mounts that volume. Using a PersistentVolume directly is not recommended, because a different PersistentVolumeClaim might bind to it later.

  • Releasing a PersistentVolume. To release a volume, delete the associated PersistentVolumeClaim object. Depending on the reclaim policy of the PersistentVolume resource, the volume is deleted or retained. The reclaim policy can be changed at any time, as shown in the second example after this list.
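
The first example is a hedged sketch of the volumes block in a pod definition; the pod name, image, and claim name are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                           # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # assumed image
      volumeMounts:
        - mountPath: /data                    # where the volume appears in the container
          name: data-volume
  volumes:
    - name: data-volume
      persistentVolumeClaim:
        claimName: example-claim              # name of an existing PVC in the project

The second example changes the reclaim policy of an existing volume by patching the PersistentVolume resource; the volume name is hypothetical:

[cloud-user@ocp ~]$ oc patch pv example-pv -p \
'{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'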

Describing PersistentVolume Access Modes

A PersistentVolume can have different read-write access options depending on the provider capabilities. Storage providers can support different access modes for a volume, but a volume can have only one access mode at a time. Access modes are listed in this table.

Access Mode     Short Name   Description
ReadWriteOnce   RWO          The volume can be mounted as read-write by a single node.
ReadOnlyMany    ROX          The volume can be mounted as read-only by many nodes.
ReadWriteMany   RWX          The volume can be mounted as read-write by many nodes.

Volumes are matched to PersistentVolumeClaim resources with similar access modes. An exact match with access modes is preferred and is attempted first; however, the volume can have a wider access mode than the PVC requests. Similarly, a volume can be of the exact requested size or larger. In any case, the provided volume has at least the required characteristics, but never less.

Important

Access modes are a description of the volume's access capabilities. The cluster does not enforce the claim's requested access, but permits access according to the volume's capabilities.

Introducing Rook-Ceph Toolbox

The Rook-Ceph Toolbox is a container that provides an interface to connect to the underlying Ceph Storage cluster of the OpenShift Container Storage operator. The toolbox is useful for running Ceph commands to view the cluster status, maps, and the devices that the cluster uses. The toolbox requires an existing, running Rook-Ceph cluster.

To install the toolbox, run the following command:

[cloud-user@ocp ~]$ oc patch OCSInitialization ocsinit -n openshift-storage \
--type json \
--patch '[{"op": "replace", "path": "/spec/enableCephTools", "value": true}]'

Verify that the container is running with the following command:

[cloud-user@ocp ~]$ oc get pods -n openshift-storage

You can run a remote shell to access the container:

[cloud-user@ocp ~]$ TOOLS_POD=$(oc get pods -n openshift-storage -l \
app=rook-ceph-tools -o name)
[cloud-user@ocp ~]$ oc rsh -n openshift-storage $TOOLS_POD
sh-4.4$ ceph status
  cluster:
    id:     0f05478d-359b-4009-942f-a099f79a490b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 23m)
    mgr: a(active, since 23m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-b=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 22m), 3 in (since 22m)
    rgw: 1 daemon active (ocs.storagecluster.cephobjectstore.a)

You can list the pools that Rook-Ceph created during the cluster creation:

sh-4.4$ ceph osd lspools
1 ocs-storagecluster-cephblockpool
2 ocs-storagecluster-cephobjectstore.rgw.control
3 ocs-storagecluster-cephfilesystem-metadata
4 ocs-storagecluster-cephfilesystem-data0
5 ocs-storagecluster-cephobjectstore.rgw.meta
6 ocs-storagecluster-cephobjectstore.rgw.log
7 ocs-storagecluster-cephobjectstore.rgw.buckets.index
8 ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec
9 .rgw.root
10 ocs-storagecluster-cephobjectstore.rgw.buckets.data
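
From the same toolbox shell, you can also review overall capacity and per-pool utilization:

sh-4.4$ ceph df
...output omitted...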

Reviewing PersistentVolume Backed by Ceph RBD

The ocs-storagecluster-ceph-rbd storage class is used to create ReadWriteOnce (RWO) persistent storage in Red Hat OpenShift Data Foundation. You can request an RBD volume by creating a PersistentVolumeClaim. This example and the following examples run in a Red Hat OpenShift Container Platform cluster with Red Hat OpenShift Data Foundation installed.

To change the default StorageClass resource to ocs-storagecluster-ceph-rbd, first find the current default StorageClass. Notice the (default) marker next to the name in the output.

[cloud-user@ocp ~]$ oc get storageclass
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   100m
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  100m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   100m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  96m
standard (default)            kubernetes.io/cinder                    Delete          WaitForFirstConsumer   true                   162m
standard-csi                  cinder.csi.openstack.org                Delete          WaitForFirstConsumer   true                   162m
[cloud-user@ocp ~]$ oc describe sc/standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
...output omitted...

Set the value of the storageclass.kubernetes.io/is-default-class annotation to false so that the standard storage class is no longer the default.

[cloud-user@ocp ~]$ oc patch storageclass standard -p \
'{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
storageclass.storage.k8s.io/standard patched
[cloud-user@ocp ~]$ oc describe sc/standard
Name:                  standard
IsDefaultClass:        No
Annotations:           storageclass.kubernetes.io/is-default-class=false
...output omitted...

Then, set the annotation to true on the ocs-storagecluster-ceph-rbd storage class to make it the default StorageClass.

[cloud-user@ocp ~]$ oc patch storageclass ocs-storagecluster-ceph-rbd -p \
'{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/ocs-storagecluster-ceph-rbd patched
[cloud-user@ocp ~]$ oc describe storageclass/ocs-storagecluster-ceph-rbd
Name:                  ocs-storagecluster-ceph-rbd
IsDefaultClass:        Yes
...output omitted...

To request a volume, create a YAML file with the PersistentVolumeClaim resource definition. Notice the accessModes and storage fields. Then, create the resource.

[cloud-user@ocp ~]$ cat cl260-pvc-01.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cl260-pvc-01
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
[cloud-user@ocp ~]$ oc create -f cl260-pvc-01.yml
persistentvolumeclaim/cl260-pvc-01 created

View the PersistentVolumeClaim resource details to check whether it is bound. When the status is Bound, view the volume resource.

[cloud-user@ocp ~]$ oc describe pvc/cl260-pvc-01
Name:          cl260-pvc-01
Namespace:     ceph-rbd-backend
StorageClass:  ocs-storagecluster-ceph-rbd
Status:        Bound
Volume:        pvc-0bf8894b-45db-4b5e-9d49-c03a1ea391fd
...output omitted...

To match the volume resource with the RBD image, inspect the VolumeHandle attribute in the PersistentVolume description.

[cloud-user@ocp ~]$ oc describe pv/pvc-0bf8894b-45db-4b5e-9d49-c03a1ea391fd
Name:            pvc-0bf8894b-45db-4b5e-9d49-c03a1ea391fd
...output omitted...
StorageClass:    ocs-storagecluster-ceph-rbd
Status:          Bound
Claim:           ceph-rbd-backend/cl260-pvc-01
...output omitted...
Capacity:        5Gi
...output omitted...
    VolumeHandle:      0001-0011-openshift-storage-0000000000000001-e39e9ebc-1032-11ec-8f56-0a580a800230
...output omitted...

To find the image in the Ceph Storage cluster, log in to the Rook-Ceph Toolbox shell and list the images in the ocs-storagecluster-cephblockpool pool. Observe that the image name matches the final part of the VolumeHandle property of the volume resource.

[cloud-user@ocp ~]$ TOOLS_POD=$(oc get pods -n openshift-storage -l \
app=rook-ceph-tools -o name)
[cloud-user@ocp ~]$ oc rsh -n openshift-storage $TOOLS_POD
sh-4.4$ rbd ls ocs-storagecluster-cephblockpool
csi-vol-e39e9ebc-1032-11ec-8f56-0a580a800230
...output omitted...
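
You can also inspect the image details, such as its size and features, by passing the pool and image name to the rbd info command:

sh-4.4$ rbd info ocs-storagecluster-cephblockpool/csi-vol-e39e9ebc-1032-11ec-8f56-0a580a800230
...output omitted...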

To resize the volume, edit the YAML file and set the storage field to the new capacity. Then, apply the changes.

[cloud-user@ocp ~]$ cat cl260-pvc-01.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cl260-pvc-01
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
[cloud-user@ocp ~]$ oc apply -f cl260-pvc-01.yml
persistentvolumeclaim/cl260-pvc-01 configured

Verify the new capacity in the volume.

[cloud-user@ocp ~]$ oc describe pv/pvc-0bf8894b-45db-4b5e-9d49-c03a1ea391fd
Name:            pvc-0bf8894b-45db-4b5e-9d49-c03a1ea391fd
...output omitted...
StorageClass:    ocs-storagecluster-ceph-rbd
Status:          Bound
Claim:           ceph-rbd-backend/cl260-pvc-01
...output omitted...
Capacity:        10Gi
...output omitted...

Reviewing PersistentVolume Backed by CephFS

You can use the ocs-storagecluster-cephfs storage class to create file-based volumes that are backed by the CephFS file system that Rook-Ceph manages. It is typical to create volume resources with the RWX (ReadWriteMany) access mode. This access mode is used when presenting a volume to several pods.

Define ocs-storagecluster-cephfs in the storageClassName field.

[cloud-user@ocp ~]$ cat cl260-pvc-02.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cl260-pvc-02
spec:
  storageClassName: ocs-storagecluster-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[cloud-user@ocp ~]$ oc create -f cl260-pvc-02.yml
persistentvolumeclaim/cl260-pvc-02 created
[cloud-user@ocp ~]$ oc describe pvc/cl260-pvc-02
Name:          cl260-pvc-02
Namespace:     cl260-cephfs
StorageClass:  ocs-storagecluster-cephfs
Status:        Bound
Volume:        pvc-793c06bc-4514-4c11-9272-cf6ce51996e8
...output omitted...
Capacity:      10Gi
Access Modes:  RWX
VolumeMode:    Filesystem
...output omitted...

To test the volume, create a demo application and scale the deployment to three replicas.

[cloud-user@ocp ~]$ oc create deployment hello-node \
--image=k8s.gcr.io/serve_hostname
deployment.apps/hello-node created

[cloud-user@ocp ~]$ oc get deployment
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           2m54s
[cloud-user@ocp ~]$ oc scale deployment hello-node --replicas 3
deployment.apps/hello-node scaled

Then, mount the volume and verify that all pods can access it.

[cloud-user@ocp ~]$ oc set volume deployment/hello-node --add \
--type=persistentVolumeClaim --claim-name=cl260-pvc-02 \
--mount-path=/cl260-data \
--name=pvc-793c06bc-4514-4c11-9272-cf6ce51996e8
deployment.apps/hello-node volume updated
[cloud-user@ocp ~]$ oc get pods
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-67754bb7bf-25xv8   1/1     Running   0          27s
hello-node-67754bb7bf-9cdk4   1/1     Running   0          31s
hello-node-67754bb7bf-lzssb   1/1     Running   0          35s
[cloud-user@ocp ~]$ oc describe pod/hello-node-67754bb7bf-25xv8 | grep -A1 Mounts
    Mounts:
      /cl260-data from pvc-793c06bc-4514-4c11-9272-cf6ce51996e8 (rw)
[cloud-user@ocp ~]$ oc describe pod/hello-node-67754bb7bf-9cdk4 | grep -A1 Mounts
    Mounts:
      /cl260-data from pvc-793c06bc-4514-4c11-9272-cf6ce51996e8 (rw)
[cloud-user@ocp ~]$ oc describe pod/hello-node-67754bb7bf-lzssb | grep -A1 Mounts
    Mounts:
      /cl260-data from pvc-793c06bc-4514-4c11-9272-cf6ce51996e8 (rw)
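
To confirm that the replicas share the same RWX volume, you can write a file from one pod and list it from another. This check assumes that the container image provides basic shell utilities, which a minimal image such as serve_hostname might not include:

[cloud-user@ocp ~]$ oc exec hello-node-67754bb7bf-25xv8 -- touch /cl260-data/shared-file
[cloud-user@ocp ~]$ oc exec hello-node-67754bb7bf-9cdk4 -- ls /cl260-data
shared-file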

Reviewing Object Resources Managed by NooBaa

The Multicloud Object Gateway (MCG), or NooBaa, service simplifies the interaction with object storage across cloud providers and clusters. The NooBaa dashboard provides an overview of the operator status. NooBaa also provides a CLI that simplifies management and day-to-day operations. The NooBaa CLI must be installed and configured to access the backing stores.
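
For example, after installing the CLI, you can verify that it reaches the MCG system with the noobaa status command:

[cloud-user@ocp ~]$ noobaa status -n openshift-storage
...output omitted...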

You can view status, access, and secret keys with the noobaa backingstore status command:

[cloud-user@ocp ~]$ noobaa backingstore status noobaa-default-backing-store
INFO[0001]  Exists: BackingStore "noobaa-default-backing-store"
INFO[0001]  Exists: Secret "rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user"
INFO[0001]  BackingStore "noobaa-default-backing-store" Phase is Ready

# BackingStore spec:
s3Compatible:
  endpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:80
  secret:
    name: rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user
    namespace: openshift-storage
...output omitted...
type: s3-compatible

# Secret data:
AccessKey: D7VHJ1I32B0LVJ0EEL9W
Endpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:80
SecretKey: wY8ww5DbdOqwre8gj1HTiA0fADY61zNcX1w8z

To create an ObjectBucketClaim with NooBaa, run the following command:

[cloud-user@ocp ~]$ noobaa obc create cl260-obc-01
INFO[0001]  Exists: StorageClass "openshift-storage.noobaa.io"
INFO[0001]  Created: ObjectBucketClaim "cl260-obc-01"
...output omitted...
INFO[0040]  OBC "cl260-obc-01" Phase is "Pending"
INFO[0043]  OBC "cl260-obc-01" Phase is Bound
...output omitted...
Connection info:
  BUCKET_HOST            : s3.openshift-storage.svc
  BUCKET_NAME            : cl260-obc-01-0d1ccb90-9caa-4515-969b-0b80a3ce8cd0
  BUCKET_PORT            : 443
  AWS_ACCESS_KEY_ID      : TFV8sT9aKaxW3xfRHkwo
  AWS_SECRET_ACCESS_KEY  : XQ6TBED4LoFq5Fj1/Le+m0cGzGEaa2wmByYPbTqz
...output omitted...
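
With this connection information, any S3 client can use the bucket. The following hedged sketch uses the AWS CLI with the credentials and bucket name printed above; the endpoint is the in-cluster S3 service, so the commands assume that you run them where that service is reachable, such as from a pod in the cluster:

[cloud-user@ocp ~]$ export AWS_ACCESS_KEY_ID=TFV8sT9aKaxW3xfRHkwo
[cloud-user@ocp ~]$ export AWS_SECRET_ACCESS_KEY=XQ6TBED4LoFq5Fj1/Le+m0cGzGEaa2wmByYPbTqz
[cloud-user@ocp ~]$ aws --endpoint-url https://s3.openshift-storage.svc:443 \
s3 ls s3://cl260-obc-01-0d1ccb90-9caa-4515-969b-0b80a3ce8cd0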

When NooBaa finishes the creation, it communicates with the OpenShift Container Platform cluster and reports the characteristics of the ObjectBucketClaim that it created.

[cloud-user@ocp ~]$ oc get obc
NAME           STORAGE-CLASS                 PHASE   AGE
cl260-obc-01   openshift-storage.noobaa.io   Bound   2m24s

You can review the attributes of an ObjectBucketClaim resource with the -o yaml option to query the resource definition. You can use this option to view details such as the generated bucket name.

[cloud-user@ocp ~]$ oc get obc cl260-obc-01 -o yaml
...output omitted...
spec:
  bucketName: cl260-obc-01-0d1ccb90-9caa-4515-969b-0b80a3ce8cd0
  generateBucketName: cl260-obc-01
  objectBucketName: obc-openshift-storage-cl260-obc-01
  storageClassName: openshift-storage.noobaa.io
status:
  phase: Bound
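
The provisioner also creates a Secret and a ConfigMap that are named after the claim. The Secret holds the S3 credentials, which you can retrieve with oc extract:

[cloud-user@ocp ~]$ oc extract secret/cl260-obc-01 --to=-
# AWS_ACCESS_KEY_ID
TFV8sT9aKaxW3xfRHkwo
# AWS_SECRET_ACCESS_KEY
XQ6TBED4LoFq5Fj1/Le+m0cGzGEaa2wmByYPbTqz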

You can view the buckets in Rook-Ceph with the toolbox:

[cloud-user@ocp ~]$ oc rsh -n openshift-storage $TOOLS_POD
sh-4.4$ radosgw-admin bucket list
[
    "nb.1631071958739.apps.cluster-ea50.dynamic.opentlc.com",
    "rook-ceph-bucket-checker-12a9cf6b-502b-4aff-b5a3-65e5b0467437"
]
sh-4.4$ radosgw-admin bucket stats
[
...output omitted...
    {
        "bucket": "rook-ceph-bucket-checker-12a9cf6b-502b-4aff-b5a3-65e5b0467437",
        "num_shards": 11,
        "tenant": "",
        "zonegroup": "68a58f7b-d282-467f-b28d-d862b4c98e1d",
        "placement_rule": "default-placement",
...output omitted...
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    }
]

OpenShift Container Platform manages Rook-Ceph resources, provided by OpenShift Data Foundation, by using the storage class and persistent volume frameworks. Developers can request persistent volumes by defining persistent volume claims with the wanted size and access mode. After a persistent volume binds to a persistent volume claim, one or more pods can mount and use it as a regular storage device.

References

For more information, refer to the Red Hat OpenShift Container Storage 4.8 documentation.

For more information, refer to the Red Hat OpenShift Container Platform 4.8 documentation.
