Lab: Deploying and Configuring Block Storage with RBD

In this review, you will configure a Red Hat Ceph Storage cluster for RBD using specified requirements.

Outcomes

You should be able to:

  • Deploy and configure Red Hat Ceph Storage for RBD mirroring.

  • Configure a client to access RBD images.

  • Manage RBD images, RBD mirroring, and RBD snapshots and clones.

If you did not reset your classroom virtual machines at the end of the last chapter, save any work you want to keep from earlier exercises on those machines and reset the classroom environment now.

Important

Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. All remaining comprehensive reviews use the default Ceph cluster provided in the initial classroom environment.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start comprehensive-review4

This command ensures that the production and backup clusters are running and that both clusters have the RBD storage pools called rbd, rbdpoolmode, and rbdimagemode. It also creates the data image in the rbd pool in the production cluster.

Specifications

  • Deploy and configure a Red Hat Ceph Storage cluster for RBD mirroring between two clusters:

    • In the production cluster, create an RBD image called vm1 in the rbdpoolmode pool, configured for one-way pool-mode mirroring, with a size of 128 MiB. Create an RBD image called vm2 in the rbdimagemode pool, configured for one-way image-mode mirroring, with a size of 128 MiB. Mirroring should be enabled for both images.

    • The production and backup clusters should be named prod and bck, respectively.

    • Map the image called rbd/data using the kernel RBD client on clienta and format the device with an XFS file system. Store a copy of the /usr/share/dict/words file at the root of the file system. Create a snapshot called beforeprod of the data RBD image, and create a clone called prod1 from the beforeprod snapshot.

    • Export the image called data to the /home/admin/cr4/data.img file. Import it as an image called data to the rbdimagemode pool. Create a snapshot called beforeprod of the new data image in the rbdimagemode pool.

    • Map the image called rbd/data again using the kernel RBD client on clienta. Copy the /etc/services file to the root of the file system. Export the changes to the rbd/data image to the /home/admin/cr4/data-diff.img file.

    • Configure the clienta node so that it will persistently mount the rbd/data RBD image as /mnt/data.

  1. Using two terminals, log in to clienta for the production cluster and serverf for the backup cluster as the admin user. Verify that each cluster is reachable and has a HEALTH_OK status.

    1. In the first terminal, log in to clienta as the admin user and use sudo to run the cephadm shell. Verify the health of the production cluster.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ sudo cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# ceph health
      HEALTH_OK
    2. In the second terminal, log in to serverf as admin and use sudo to run the cephadm shell. Verify the health of the backup cluster. Exit from the cephadm shell.

      [student@workstation ~]$ ssh admin@serverf
      ...output omitted...
      [admin@serverf ~]$ sudo cephadm shell
      ...output omitted...
      [ceph: root@serverf /]# ceph health
      HEALTH_OK
      [ceph: root@serverf /]# exit
      [admin@serverf ~]$
  2. In the production cluster, create the rbdpoolmode/vm1 RBD image, enable one-way pool-mode mirroring on the pool, and view the image information.

    1. Create an RBD image called vm1 in the rbdpoolmode pool in the production cluster. Specify a size of 128 MiB, and enable the exclusive-lock and journaling RBD image features.

      [ceph: root@clienta /]# rbd create vm1 \
        --size 128 \
        --pool rbdpoolmode \
        --image-feature=exclusive-lock,journaling
    2. Enable pool-mode mirroring on the rbdpoolmode pool.

      [ceph: root@clienta /]# rbd mirror pool enable rbdpoolmode pool
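
      To confirm that pool-mode mirroring is now active on the pool, you can optionally query the pool mirroring configuration; this extra check is not required by the lab.

      [ceph: root@clienta /]# rbd mirror pool info rbdpoolmode
      ...output omitted...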
    3. View the vm1 image information. Exit from the cephadm shell.

      [ceph: root@clienta /]# rbd info --pool rbdpoolmode vm1
      rbd image 'vm1':
      	size 128 MiB in 32 objects
      	order 22 (4 MiB objects)
      	snapshot_count: 0
      	id: ad7c2dd2d3be
      	block_name_prefix: rbd_data.ad7c2dd2d3be
      	format: 2
      	features: exclusive-lock, journaling
      	op_features:
      	flags:
      	create_timestamp: Tue Oct 26 23:46:28 2021
      	access_timestamp: Tue Oct 26 23:46:28 2021
      	modify_timestamp: Tue Oct 26 23:46:28 2021
      	journal: ad7c2dd2d3be
      	mirroring state: enabled
      	mirroring mode: journal
      	mirroring global id: 6ea4b768-a53d-4195-a1f5-37733eb9af76
      	mirroring primary: true
      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$
  3. In the production cluster, run the cephadm shell with a bind mount of /home/admin/cr4/. Bootstrap the storage cluster peer, create the Ceph user accounts, and save the token in the /home/admin/cr4/pool_token_prod file, which is available as /mnt/pool_token_prod inside the container. Name the production cluster prod. Copy the bootstrap token file to the backup storage cluster.

    1. In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.

      [admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
      ...output omitted...
      [ceph: root@clienta /]#
    2. Bootstrap the storage cluster peer and create the Ceph user accounts, saving the output in the /mnt/pool_token_prod file. Name the production cluster prod.

      [ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
      --site-name prod rbdpoolmode > /mnt/pool_token_prod
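
      You can optionally verify that the bootstrap token file was written before copying it; this check is not part of the lab requirements.

      [ceph: root@clienta /]# ls -l /mnt/pool_token_prod
      ...output omitted...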
    3. Exit the cephadm shell. Copy the bootstrap token file to the /home/admin/cr4/ directory on the backup storage cluster.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo rsync -avP /home/admin/cr4/ \
      serverf:/home/admin/cr4/
      ...output omitted...
  4. In the backup cluster, run the cephadm shell with a bind mount of /home/admin/cr4/. Deploy an rbd-mirror daemon on the serverf node. Import the bootstrap token located in the /home/admin/cr4/ directory. Name the backup cluster bck. Verify that the RBD image is present.

    1. In the backup cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.

      [admin@serverf ~]$ sudo cephadm shell --mount /home/admin/cr4/
      ...output omitted...
      [ceph: root@serverf /]#
    2. Deploy an rbd-mirror daemon by using the --placement option to select the serverf.lab.example.com node. Verify the placement.

      [ceph: root@serverf /]# ceph orch apply rbd-mirror \
      --placement=serverf.lab.example.com
      Scheduled rbd-mirror update...
      [ceph: root@serverf /]# ceph orch ps --format=yaml --service-name=rbd-mirror
      daemon_type: rbd-mirror
      daemon_id: serverf.hhunqx
      hostname: serverf.lab.example.com
      ...output omitted...
    3. Import the bootstrap token located in /mnt/pool_token_prod. Name the backup cluster bck.

      [ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
      --site-name bck --direction rx-only rbdpoolmode /mnt/pool_token_prod

      Important

      Ignore the known error containing the following text: auth: unable to find a keyring on …

    4. Verify that the RBD image is present. Wait until the RBD image is displayed.

      [ceph: root@serverf /]# rbd --pool rbdpoolmode ls
      vm1
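
      If the image takes a while to appear, you can optionally check the mirroring status of the pool to follow the synchronization progress; this check is not required by the lab.

      [ceph: root@serverf /]# rbd mirror pool status rbdpoolmode --verbose
      ...output omitted...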
  5. In the production cluster, create the rbdimagemode/vm2 RBD image and enable one-way image-mode mirroring on the pool. Also, enable mirroring for the vm2 RBD image in the rbdimagemode pool.

    1. In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.

      [admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
      ...output omitted...
      [ceph: root@clienta /]#
    2. Create an RBD image called vm2 in the rbdimagemode pool in the production cluster. Specify a size of 128 MiB, and enable the exclusive-lock and journaling RBD image features.

      [ceph: root@clienta /]# rbd create vm2 \
      --size 128 \
      --pool rbdimagemode \
      --image-feature=exclusive-lock,journaling
    3. Enable image-mode mirroring on the rbdimagemode pool.

      [ceph: root@clienta /]# rbd mirror pool enable rbdimagemode image
    4. Enable mirroring for the vm2 RBD image in the rbdimagemode pool.

      [ceph: root@clienta /]# rbd mirror image enable rbdimagemode/vm2
      Mirroring enabled
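
      To confirm that mirroring is enabled for the image, you can optionally view the image information again; the output should report the mirroring state and mode. This check is not required by the lab.

      [ceph: root@clienta /]# rbd info --pool rbdimagemode vm2
      ...output omitted...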
  6. In the production cluster, bootstrap the storage cluster peer, create the Ceph user accounts, and save the token in the /home/admin/cr4/image_token_prod file, which is available as /mnt/image_token_prod inside the container. Copy the bootstrap token file to the backup storage cluster.

    1. Bootstrap the storage cluster peer and create the Ceph user accounts, saving the output in the /mnt/image_token_prod file.

      [ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
      rbdimagemode > /mnt/image_token_prod
    2. Exit from the cephadm shell. Copy the bootstrap token file to the /home/admin/cr4/ directory on the backup storage cluster.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo rsync -avP /home/admin/cr4/ \
      serverf:/home/admin/cr4/
      ...output omitted...
  7. In the backup cluster, import the bootstrap token. Verify that the RBD image is present.

    1. Import the bootstrap token located in /mnt/image_token_prod. The backup cluster was already named bck when you imported the first bootstrap token, so the --site-name option is not needed here.

      [ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
      --direction rx-only rbdimagemode /mnt/image_token_prod

      Important

      Ignore the known error containing the following text: auth: unable to find a keyring on …

    2. Verify that the RBD image is present. Wait until the RBD image appears.

      [ceph: root@serverf /]# rbd --pool rbdimagemode ls
      vm2
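
      You can optionally check the mirroring status of the individual image from the backup cluster; this check is not required by the lab.

      [ceph: root@serverf /]# rbd mirror image status rbdimagemode/vm2
      ...output omitted...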
    3. Return to workstation as the student user, and then exit the second terminal.

      [ceph: root@serverf /]# exit
      exit
      [admin@serverf ~]$ exit
      [student@workstation ~]$ exit
  8. In the production cluster, map the image called rbd/data using the kernel RBD client on clienta. Format the device with an XFS file system. Temporarily mount the file system and store a copy of the /usr/share/dict/words file at the root of the file system. Unmount and unmap the device when done.

    1. Map the data image in the rbd pool using the kernel RBD client.

      [admin@clienta ~]$ sudo rbd map --pool rbd data
      /dev/rbd0
    2. Format the /dev/rbd0 device with an XFS file system and mount the file system on the /mnt/data directory.

      [admin@clienta ~]$ sudo mkfs.xfs /dev/rbd0
      meta-data=/dev/rbd0              isize=512    agcount=8, agsize=4096 blks
               =                       sectsz=512   attr=2, projid32bit=1
               =                       crc=1        finobt=1, sparse=1, rmapbt=0
               =                       reflink=1
      data     =                       bsize=4096   blocks=32768, imaxpct=25
               =                       sunit=16     swidth=16 blks
      naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
      log      =internal log           bsize=4096   blocks=1872, version=2
               =                       sectsz=512   sunit=16 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
      Discarding blocks...Done.
      [admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
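
      If you want to confirm that the new file system is mounted before copying data, you can optionally check it with df; this check is not required by the lab.

      [admin@clienta ~]$ df /mnt/data
      ...output omitted...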
    3. Copy the /usr/share/dict/words file to the root of the file system, /mnt/data. List the content to verify the copy.

      [admin@clienta ~]$ sudo cp /usr/share/dict/words /mnt/data/
      [admin@clienta ~]$ ls /mnt/data/
      words
    4. Unmount and unmap the /dev/rbd0 device.

      [admin@clienta ~]$ sudo umount /dev/rbd0
      [admin@clienta ~]$ sudo rbd unmap --pool rbd data
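
      To confirm that no RBD devices remain mapped, you can optionally run the rbd showmapped command, which should return no output. This check is not required by the lab.

      [admin@clienta ~]$ rbd showmapped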
  9. In the production cluster, create a snapshot called beforeprod of the RBD image data. Create a clone called prod1 from the snapshot called beforeprod.

    1. In the production cluster, use sudo to run the cephadm shell. Create a snapshot called beforeprod of the RBD image data in the rbd pool.

      [admin@clienta ~]$ sudo cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# rbd snap create rbd/data@beforeprod
      Creating snap: 100% complete...done.
    2. Verify the snapshot by listing the snapshots of the data RBD image in the rbd pool.

      [ceph: root@clienta /]# rbd snap list --pool rbd data
      SNAPID  NAME        SIZE     PROTECTED  TIMESTAMP
           4  beforeprod  128 MiB             Thu Oct 28 00:03:08 2021
    3. Protect the beforeprod snapshot and create the clone. Exit from the cephadm shell.

      [ceph: root@clienta /]# rbd snap protect rbd/data@beforeprod
      [ceph: root@clienta /]# rbd clone rbd/data@beforeprod rbd/prod1
      [ceph: root@clienta /]# exit
      exit
    4. Verify that the clone also contains the words file by mapping and mounting the clone image. Unmount the file system and unmap the device after verification.

      [admin@clienta ~]$ sudo rbd map --pool rbd prod1
      /dev/rbd0
      [admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
      [admin@clienta ~]$ ls /mnt/data
      words
      [admin@clienta ~]$ sudo umount /mnt/data
      [admin@clienta ~]$ sudo rbd unmap --pool rbd prod1
  10. In the production cluster, export the image called data to the /home/admin/cr4/data.img file. Import it as an image called data to the rbdimagemode pool. Create a snapshot called beforeprod of the new data image in the rbdimagemode pool.

    1. In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory. Export the image called data to the /mnt/data.img file.

      [admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
      ...output omitted...
      [ceph: root@clienta /]# rbd export --pool rbd data /mnt/data.img
      Exporting image: 100% complete...done.
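
      You can optionally verify that the exported file was created by listing it inside the container; this check is not required by the lab.

      [ceph: root@clienta /]# ls -lh /mnt/data.img
      ...output omitted...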
    2. Import the /mnt/data.img file as an image called data to the pool called rbdimagemode. Verify the import by listing the images in the rbdimagemode pool.

      [ceph: root@clienta /]# rbd import /mnt/data.img rbdimagemode/data
      Importing image: 100% complete...done.
      [ceph: root@clienta /]# rbd --pool rbdimagemode ls
      data
      vm2
    3. Create a snapshot called beforeprod of the image called data in the pool called rbdimagemode. Exit from the cephadm shell.

      [ceph: root@clienta /]# rbd snap create rbdimagemode/data@beforeprod
      Creating snap: 100% complete...done.
      [ceph: root@clienta /]# exit
      exit
  11. On the clienta host, use the kernel RBD client to remap and remount the RBD image called data in the pool called rbd. Copy the /etc/services file to the root of the file system. Unmount the file system and unmap the device when done.

    1. Map the data image in the rbd pool using the kernel RBD client. Mount the file system on /mnt/data.

      [admin@clienta ~]$ sudo rbd map --pool rbd data
      /dev/rbd0
      [admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
    2. Copy the /etc/services file to the root of the file system, /mnt/data. List the contents of /mnt/data for verification.

      [admin@clienta ~]$ sudo cp /etc/services /mnt/data/
      [admin@clienta ~]$ ls /mnt/data/
      services  words
    3. Unmount the file system and unmap the data image in the rbd pool.

      [admin@clienta ~]$ sudo umount /mnt/data
      [admin@clienta ~]$ sudo rbd unmap --pool rbd data
  12. In the production cluster, export the changes made to the rbd/data image after the creation of the beforeprod snapshot to a file called /home/admin/cr4/data-diff.img. Import the changes from the /mnt/data-diff.img file into the data image in the rbdimagemode pool.

    1. In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory. Export the changes made to the data image in the rbd pool after the creation of the beforeprod snapshot to a file called /mnt/data-diff.img.

      [admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
      ...output omitted...
      [ceph: root@clienta /]# rbd export-diff \
      --from-snap beforeprod rbd/data \
      /mnt/data-diff.img
      Exporting image: 100% complete...done.
    2. Import changes from the /mnt/data-diff.img file to the image called data in the pool called rbdimagemode. Exit from the cephadm shell.

      [ceph: root@clienta /]# rbd import-diff \
      /mnt/data-diff.img \
      rbdimagemode/data
      Importing image diff: 100% complete...done.
      [ceph: root@clienta /]# exit
      exit
    3. Verify that the image called data in the pool called rbdimagemode also contains the services file by mapping and mounting the image. When done, unmount the file system and unmap the image.

      [admin@clienta ~]$ sudo rbd map rbdimagemode/data
      /dev/rbd0
      [admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
      [admin@clienta ~]$ ls /mnt/data
      services  words
      [admin@clienta ~]$ sudo umount /mnt/data
      [admin@clienta ~]$ sudo rbd unmap --pool rbdimagemode data
  13. Configure the clienta host so that it will persistently mount the rbd/data RBD image as /mnt/data. Authenticate as the admin Ceph user by using existing keys found in the /etc/ceph/ceph.client.admin.keyring file.

    1. Create an entry for rbd/data in the /etc/ceph/rbdmap RBD map file. The resulting file should have the following contents:

      [admin@clienta ~]$ cat /etc/ceph/rbdmap
      # RbdDevice		Parameters
      #poolname/imagename	id=client,keyring=/etc/ceph/ceph.client.keyring
      rbd/data id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
    2. Create an entry for /dev/rbd/rbd/data in the /etc/fstab file. The resulting file should have the following contents:

      [admin@clienta ~]$ cat /etc/fstab
      UUID=d47ead13-ec24-428e-9175-46aefa764b26	/	xfs	defaults	0	0
      UUID=7B77-95E7	/boot/efi	vfat	defaults,uid=0,gid=0,umask=077,shortname=winnt	0	2
      /dev/rbd/rbd/data /mnt/data xfs noauto 0 0
    3. Use the rbdmap command to verify your RBD map configuration.

      [admin@clienta ~]$ sudo rbdmap map
      [admin@clienta ~]$ rbd showmapped
      id  pool  namespace  image  snap  device
      0   rbd              data   -     /dev/rbd0
      [admin@clienta ~]$ sudo rbdmap unmap
      [admin@clienta ~]$ rbd showmapped
    4. After you have verified that the RBD mapped devices work, enable the rbdmap service. Reboot the clienta host to verify that the RBD device mounts persistently.

      [admin@clienta ~]$ sudo systemctl enable rbdmap
      Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /usr/lib/systemd/system/rbdmap.service.
      [admin@clienta ~]$ sudo reboot
      Connection to clienta closed by remote host.
      Connection to clienta closed.
    5. When clienta finishes rebooting, log in to clienta as the admin user, and verify that it has mounted the RBD device.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ df /mnt/data
      Filesystem     1K-blocks  Used Available Use% Mounted on
      /dev/rbd0         123584 13460    110124  11% /mnt/data
    6. Return to workstation as the student user.

      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

Grade your work by running the lab grade comprehensive-review4 command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade comprehensive-review4

Finish

As the student user on the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish comprehensive-review4

This concludes the lab.
