Lab: Providing Block Storage Using RADOS Block Devices

In this lab you will configure Red Hat Ceph Storage to provide block storage to clients using RADOS block devices (RBDs). You will import and export RBD images to and from the Ceph cluster.

Outcomes

You should be able to:

  • Create and prepare an RBD pool.

  • Create, manage, and use RBD images.

  • Export and import RBD images.

As the student user on the workstation machine, use the lab command to prepare your system for this lab.

[student@workstation ~]$ lab start block-review

This command verifies the status of the cluster and creates the rbd pool if it does not already exist.

Procedure 6.4. Instructions

Perform the following steps on the clienta admin node, which is a client node of the primary 3-node Ceph storage cluster.

  1. Log in to clienta as the admin user. Create a pool called rbd260, enable the rbd application on the pool for the Ceph Block Device, and initialize the pool for use by RBD.

    1. Log in to clienta as the admin user and use sudo to run the cephadm shell.

      Verify that the primary cluster is in a healthy state.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ sudo cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# ceph health
      HEALTH_OK
    2. Create a pool called rbd260 with 32 placement groups. Enable the rbd application on the pool for the Ceph Block Device and initialize the pool for use by RBD.

      [ceph: root@clienta /]# ceph osd pool create rbd260 32
      pool 'rbd260' created
      [ceph: root@clienta /]# ceph osd pool application enable rbd260 rbd
      enabled application 'rbd' on pool 'rbd260'
      [ceph: root@clienta /]# rbd pool init -p rbd260
    3. List the rbd260 pool details to verify your work. The pool ID might be different in your lab environment.

      [ceph: root@clienta /]# ceph osd pool ls detail | grep rbd260
      pool 7 'rbd260' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 203 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
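
      Note

      As an optional check that is not required by this lab, you can list the client applications enabled on the rbd260 pool. The output should include the rbd application that you enabled in the previous step.

      [ceph: root@clienta /]# ceph osd pool application get rbd260
      ...output omitted...
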
  2. Create a 128 MiB RADOS block device image called prod260 in the rbd260 pool. Verify your work.

    1. Create the 128 MiB prod260 RBD image in the rbd260 pool.

      [ceph: root@clienta /]# rbd create prod260 --size 128 --pool rbd260
    2. List the images in the rbd260 pool to verify the result.

      [ceph: root@clienta /]# rbd ls rbd260
      prod260
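
      Note

      As an optional verification that is not required by this lab, the rbd info command displays the image details, including its provisioned size, object size, and enabled features.

      [ceph: root@clienta /]# rbd info rbd260/prod260
      ...output omitted...
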
  3. Map the prod260 RBD image in the rbd260 pool to a local block device by using the kernel RBD client. Format the device with an XFS file system. Mount the file system on the /mnt/prod260 directory and copy the /etc/resolv.conf file to the root of this new file system. When done, unmount the file system and unmap the device.

    1. Exit the cephadm shell, then switch to the root user. Install the ceph-common package on the clienta node. Map the prod260 image from the rbd260 pool using the kernel RBD client.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo -i
      [root@clienta ~]# yum install -y ceph-common
      ...output omitted...
      Complete!
      [root@clienta ~]# rbd map --pool rbd260 prod260
      /dev/rbd0
    2. Format the /dev/rbd0 device with an XFS file system and mount it on the /mnt/prod260 directory. Change the user and group ownership of the root directory of the new file system to the admin user and group.

      [root@clienta ~]# mkfs.xfs /dev/rbd0
      ...output omitted...
      [root@clienta ~]# mount /dev/rbd0 /mnt/prod260
      [root@clienta ~]# chown admin:admin /mnt/prod260
    3. Copy the /etc/resolv.conf file to the root of the /mnt/prod260 file system, and then list the contents to verify the copy.

      [root@clienta ~]# cp /etc/resolv.conf /mnt/prod260
      [root@clienta ~]# ls /mnt/prod260/
      resolv.conf
    4. Unmount and unmap the /dev/rbd0 device.

      [root@clienta ~]# umount /dev/rbd0
      [root@clienta ~]# rbd unmap --pool rbd260 prod260
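
      Note

      As an optional verification, you can confirm that the image is no longer mapped. When no RBD images are mapped on the node, the rbd showmapped command returns no output.

      [root@clienta ~]# rbd showmapped
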
  4. Create a snapshot of the prod260 RBD image in the rbd260 pool and name it beforeprod.

    1. Run the cephadm shell. Create the beforeprod snapshot of the prod260 image in the rbd260 pool.

      [root@clienta ~]# cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# rbd snap create rbd260/prod260@beforeprod
      Creating snap: 100% complete...done.
    2. List the snapshots of the prod260 RBD image in the rbd260 pool to verify your work.

      [ceph: root@clienta /]# rbd snap list --pool rbd260 prod260
      SNAPID  NAME        SIZE     PROTECTED  TIMESTAMP
           4  beforeprod  128 MiB             Mon Oct  4 17:11:57 2021

      Note

      The snapshot ID and the time stamp are different in your lab environment.
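
      Note

      Although it is not part of this lab, a snapshot such as beforeprod could later be used to revert the image to its state at snapshot time. As a sketch only, and assuming that the image is unmapped and not in use, the command would resemble the following. Do not run it during this exercise.

      [ceph: root@clienta /]# rbd snap rollback rbd260/prod260@beforeprod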

  5. Export the prod260 RBD image from the rbd260 pool to the /root/prod260.xfs file. Import that image file into the rbd pool on your primary 3-node Ceph cluster, and name the imported image img260 in that pool.

    1. Export the prod260 RBD image from the rbd260 pool to a file called /root/prod260.xfs.

      [ceph: root@clienta /]# rbd export rbd260/prod260 /root/prod260.xfs
      Exporting image: 100% complete...done.
    2. Retrieve the size of the /root/prod260.xfs file to verify the export.

      [ceph: root@clienta /]# ls -lh /root/prod260.xfs
      -rw-r--r--. 1 root root 128M Oct  4 17:39 /root/prod260.xfs
    3. Import the /root/prod260.xfs file as the img260 RBD image in the rbd pool.

      [ceph: root@clienta /]# rbd import /root/prod260.xfs rbd/img260
      Importing image: 100% complete...done.
    4. List the images in the rbd pool to verify the import. Exit from the cephadm shell.

      [ceph: root@clienta /]# rbd --pool rbd ls
      img260
      [ceph: root@clienta /]# exit
      exit
      [root@clienta ~]#

      Note

      The rbd ls command might display images from previous exercises.
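
      Note

      As an optional check, because the ceph-common package is installed on clienta, you can also inspect the imported image from outside the cephadm shell. The rbd info command should report the same 128 MiB size as the exported image file.

      [root@clienta ~]# rbd info rbd/img260
      ...output omitted...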

  6. Configure the client system so that it persistently mounts the rbd260/prod260 RBD image as /mnt/prod260. Authenticate as the admin Ceph user using existing keys found in the /etc/ceph/ceph.client.admin.keyring file.

    1. Create an entry for the rbd260/prod260 image in the /etc/ceph/rbdmap RBD map file. The resulting file should have the following contents:

      [root@clienta ~]# cat /etc/ceph/rbdmap
      # RbdDevice             Parameters
      #poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
      rbd260/prod260          id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
    2. Create an entry for the /dev/rbd/rbd260/prod260 device in the /etc/fstab file. The noauto option prevents systemd from mounting the file system at boot, before the RBD image is mapped; the rbdmap service mounts the file system after it maps the image. The resulting file should have the following contents:

      [root@clienta ~]# cat /etc/fstab
      UUID=d47ead13-ec24-428e-9175-46aefa764b26       /       xfs     defaults 0 0
      UUID=7B77-95E7  /boot/efi       vfat    defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
      /dev/rbd/rbd260/prod260 /mnt/prod260    xfs     noauto  0 0
    3. Use the rbdmap command to map the image and validate your RBD map configuration, then unmap the image.

      [root@clienta ~]# rbdmap map
      [root@clienta ~]# rbd showmapped
      id  pool    namespace  image    snap  device
      0   rbd260             prod260  -     /dev/rbd0
      [root@clienta ~]# rbdmap unmap
      [root@clienta ~]# rbd showmapped
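
      Note

      While the image is mapped, the udev rules provided by the ceph-common package create the /dev/rbd/rbd260/prod260 symbolic link that the /etc/fstab entry references. If you want to verify this, run the following command between the rbdmap map and rbdmap unmap commands.

      [root@clienta ~]# ls -l /dev/rbd/rbd260/prod260
      ...output omitted...
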
    4. After you have confirmed that the RBD mapped devices work, enable the rbdmap service. Reboot the clienta node to confirm that the RBD device mounts persistently.

      [root@clienta ~]# systemctl enable rbdmap
      Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /usr/lib/systemd/system/rbdmap.service.
      [root@clienta ~]# reboot
      Connection to clienta closed by remote host.
      Connection to clienta closed.
    5. After rebooting, log in to the clienta node as the admin user. Confirm that the system has mounted the RBD device.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ df -h /mnt/prod260
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/rbd0       121M  7.8M  113M   7% /mnt/prod260
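
      Note

      As an optional check, confirm that the data on the RBD image persisted across the reboot. The /mnt/prod260 directory should still contain the resolv.conf file that you copied earlier.

      [admin@clienta ~]$ ls /mnt/prod260/
      resolv.conf
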
  7. Return to workstation as the student user.

    1. Return to workstation as the student user.

      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

Grade your work by running the lab grade block-review command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade block-review

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish block-review

This concludes the lab.

Revision: cl260-5.0-29d2128