Guided Exercise: Managing RADOS Block Device Snapshots

In this exercise, you will create and clone a RADOS block device snapshot.

Outcomes

You should be able to create and manage RADOS block device snapshots, and clone a snapshot to create a child image.

As the student user on the workstation machine, use the lab command to prepare your systems for this exercise.

[student@workstation ~]$ lab start block-snapshot

This command confirms that the hosts required for this exercise are accessible. It also creates an image called image1 within the rbd pool. Finally, this command creates a user and an associated key in the Red Hat Ceph Storage cluster and copies the key to the clientb node.

Procedure 6.2. Instructions

  1. Use the ceph health command to verify that the primary cluster is in a healthy state.

    1. Log in to clienta as the admin user and switch to the root user.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ sudo -i
      [root@clienta ~]#
    2. Use the cephadm shell to run the ceph health command to verify that the primary cluster is in a healthy state.

      [root@clienta ~]# cephadm shell -- ceph health
      HEALTH_OK
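
    Optionally, you can confirm the setup that the lab start command performed. These checks are not part of the exercise steps, and the client.rbd.clientb user name is inferred from the --id=rbd.clientb value that is used later on clientb.

      [root@clienta ~]# cephadm shell -- ceph auth get client.rbd.clientb
      [root@clienta ~]# cephadm shell -- rbd ls --pool rbd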
  2. Map the rbd/image1 image as a block device, format it with an XFS file system, and confirm that the /dev/rbd0 device is writable.

    1. Map the rbd/image1 image as a block device.

      [root@clienta ~]# rbd map --pool rbd image1
      /dev/rbd0
    2. Format the block device with an XFS file system.

      [root@clienta ~]# mkfs.xfs /dev/rbd0
      ...output omitted...
    3. Confirm that the /dev/rbd0 device is writable.

      [root@clienta ~]# blockdev --getro /dev/rbd0
      0
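
    Before creating a snapshot, you can optionally review the image details and the current mapping. These commands only display information and are not required by the exercise.

      [root@clienta ~]# rbd info --pool rbd image1
      [root@clienta ~]# rbd showmapped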
  3. Create an initial snapshot called firstsnap. Calculate the provisioned and actual disk usage of the rbd/image1 image and its associated snapshots by using the rbd disk-usage command.

    1. Run the cephadm shell. Create an initial snapshot called firstsnap.

      [root@clienta ~]# cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# rbd snap create rbd/image1@firstsnap
      Creating snap: 100% complete...done.
    2. Calculate the provisioned and used size of the rbd/image1 image and its associated snapshots.

      [ceph: root@clienta /]# rbd disk-usage --pool rbd image1
      NAME              PROVISIONED  USED
      image1@firstsnap      128 MiB  36 MiB
      image1                128 MiB  36 MiB
      <TOTAL>               128 MiB  72 MiB
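
    You can also list the snapshots of an image with the rbd snap ls command. This optional check shows that the firstsnap snapshot exists.

      [ceph: root@clienta /]# rbd snap ls rbd/image1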
  4. Open another terminal window. Log in to clientb as the admin user and switch to the root user. Set the CEPH_ARGS environment variable to '--id=rbd.clientb' so that rbd commands on clientb authenticate as the dedicated user (see the note after this step).

    [student@workstation ~]$ ssh admin@clientb
    ...output omitted...
    [admin@clientb ~]$ sudo -i
    [root@clientb ~]# export CEPH_ARGS='--id=rbd.clientb'
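
    The CEPH_ARGS environment variable supplies default options to the Ceph command-line tools, so every rbd command in this terminal runs as the client.rbd.clientb user without repeating the option. Passing the option explicitly on each command is equivalent; for example, the following command lists the images in the rbd pool, assuming that the user's capabilities allow it:

      [root@clientb ~]# rbd --id=rbd.clientb ls --pool rbd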
  5. On the clientb node, map the image1@firstsnap snapshot and verify that the device is read-only.

    1. Map the rbd/image1@firstsnap snapshot as a block device.

      [root@clientb ~]# rbd map --pool rbd image1@firstsnap
      /dev/rbd0
      [root@clientb ~]# rbd showmapped
      id  pool  namespace  image   snap       device
      0   rbd              image1  firstsnap  /dev/rbd0
    2. Confirm that /dev/rbd0 is a read-only block device.

      [root@clientb ~]# blockdev --getro /dev/rbd0
      1
  6. On the clienta node, exit the cephadm shell. Mount the /dev/rbd0 device in the /mnt/image directory, copy some data into it, and check the disk space usage.

    1. Mount the block device in the /mnt/image directory.

      [ceph: root@clienta /]# exit
      [root@clienta ~]# mount /dev/rbd0 /mnt/image
      [root@clienta ~]# mount | grep rbd
      /dev/rbd0 on /mnt/image type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=64k,
      sunit=128,swidth=128,noquota)
    2. Copy some data into the /mnt/image directory.

      [root@clienta ~]# cp /etc/ceph/ceph.conf /mnt/image/file0
      [root@clienta ~]# ls /mnt/image/
      file0
    3. Check the disk space usage for the /dev/rbd0 device.

      [root@clienta ~]# df /mnt/image/
      Filesystem     1K-blocks  Used Available Use% Mounted on
      /dev/rbd0         123584  7944    115640   7% /mnt/image
  7. On the clientb node, mount the image1@firstsnap snapshot in the /mnt/snapshot directory. Review the disk space usage for the /dev/rbd0 device and list the directory contents. Unmount the /mnt/snapshot directory, and then unmap the /dev/rbd0 device.

    1. Mount the block device in the /mnt/snapshot directory.

      [root@clientb ~]# mount /dev/rbd0 /mnt/snapshot/
      mount: /mnt/snapshot: WARNING: device write-protected, mounted read-only.
    2. Check the disk space usage for the /dev/rbd0 device and list the directory contents.

      [root@clientb ~]# df /mnt/snapshot/
      Filesystem     1K-blocks  Used Available Use% Mounted on
      /dev/rbd0         123584   480    123104   1% /mnt/snapshot
      [root@clientb ~]# ls -l /mnt/snapshot/
      total 0

      Notice that the file0 file is not listed on the clientb node. The snapshot preserves the state of the file system at the time the snapshot was taken, which was before file0 was copied to the original image.

      Changes to the original block device did not alter the snapshot. See the note after this step for how a snapshot can also be used to roll an image back.

    3. Unmount the /mnt/snapshot directory, and then unmap the /dev/rbd0 device.

      [root@clientb ~]# umount /mnt/snapshot
      [root@clientb ~]# rbd unmap --pool rbd image1@firstsnap
      [root@clientb ~]# rbd showmapped
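
    A snapshot is a read-only, point-in-time view of an image. Although it is not part of this exercise, a snapshot can also be used to roll the original image back to that point in time with the rbd snap rollback command; the image must be unmounted and unmapped first. For example, from the cephadm shell on clienta:

      [ceph: root@clienta /]# rbd snap rollback rbd/image1@firstsnap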
  8. On the clienta node, protect the firstsnap snapshot and create a clone called clone1 in the rbd pool. Verify that the child image is created.

    1. Run the cephadm shell. Protect the firstsnap snapshot.

      [root@clienta ~]# cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# rbd snap protect rbd/image1@firstsnap
    2. Clone the firstsnap block device snapshot to create a writable child image called clone1 in the rbd pool.

      [ceph: root@clienta /]# rbd clone rbd/image1@firstsnap rbd/clone1
    3. List the children of the firstsnap snapshot.

      [ceph: root@clienta /]# rbd children rbd/image1@firstsnap
      rbd/clone1
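
    The clone remains linked to its protected parent snapshot until it is flattened. You can optionally confirm the relationship from the child side with the rbd info command, which reports rbd/image1@firstsnap in its parent field. Running rbd flatten rbd/clone1 would copy the parent data into the clone and detach it, but do not flatten the clone in this exercise.

      [ceph: root@clienta /]# rbd info rbd/clone1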
  9. On the clientb node, map the rbd/clone1 image as a block device, mount it, and then copy some content to the clone.

    1. Map the rbd/clone1 image as a block device.

      [root@clientb ~]# rbd map --pool rbd clone1
      /dev/rbd0
    2. Mount the block device in the /mnt/clone directory, and then list the directory contents.

      [root@clientb ~]# mount /dev/rbd0 /mnt/clone
      [root@clientb ~]# ls -l /mnt/clone
      total 0
    3. Add some content to the /mnt/clone directory.

      [root@clientb ~]# dd if=/dev/zero of=/mnt/clone/file1 bs=1M count=10
      ...output omitted...
      [root@clientb ~]# ls -l /mnt/clone/
      total 10240
      -rw-r--r--. 1 root root 10485760 Oct 15 00:04 file1
  10. Clean up your environment.

    1. On the clientb node, unmount the file system and unmap the RBD image.

      [root@clientb ~]# umount /mnt/clone
      [root@clientb ~]# rbd unmap --pool rbd clone1
      [root@clientb ~]# rbd showmapped
      [root@clientb ~]# unset CEPH_ARGS
    2. On the clienta node, exit the cephadm shell. Unmount the file system, and then unmap the RBD image.

      [ceph: root@clienta /]# exit
      [root@clienta ~]# umount /mnt/image
      [root@clienta ~]# rbd unmap --pool rbd image1
      [root@clienta ~]# rbd showmapped
  11. Log out of clientb and close the second terminal. In the first terminal, log out of clienta to return to workstation as the student user.

    [root@clientb ~]# exit
    [admin@clientb ~]$ exit
    [student@workstation ~]$ exit
    [root@clienta ~]# exit
    [admin@clienta ~]$ exit
    [student@workstation ~]$

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish block-snapshot
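
For reference, if you ever need to remove the exercise resources manually, the clone must be removed (or flattened) first, because a protected snapshot cannot be unprotected while it still has children. A minimal sequence from the cephadm shell on clienta, which you do not need to run here, would be:

[ceph: root@clienta /]# rbd rm rbd/clone1
[ceph: root@clienta /]# rbd snap unprotect rbd/image1@firstsnap
[ceph: root@clienta /]# rbd snap rm rbd/image1@firstsnap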

This concludes the guided exercise.

Revision: cl260-5.0-29d2128