In this exercise, you will create and clone a RADOS block device snapshot.
Outcomes
You should be able to create and manage RADOS block device snapshots, and clone a snapshot to create a child image.
As the student user on the workstation machine, use the lab command to prepare your systems for this exercise.
[student@workstation ~]$ lab start block-snapshot
This command confirms that the hosts required for this exercise are accessible.
It also creates an image called image1 within the rbd pool.
Finally, this command creates a user and an associated key in the Red Hat Ceph Storage cluster and copies the key to the clientb node.
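If you want to confirm that the key reached clientb, you can list the keyrings in /etc/ceph/ after you log in to that node later in the exercise. This is an optional check, not part of the graded steps; it assumes the default ceph.client.<id>.keyring naming convention and the rbd.clientb user that the lab script creates.
[root@clientb ~]# ls -l /etc/ceph/
A keyring file such as ceph.client.rbd.clientb.keyring should appear alongside ceph.conf.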
Procedure 6.2. Instructions
Use the ceph health command to verify that the primary cluster is in a healthy state.
Log in to clienta as the admin user and switch to the root user.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ sudo -i
[root@clienta ~]#
Use the cephadm shell to run the ceph health command to verify that the primary cluster is in a healthy state.
[root@clienta ~]# cephadm shell -- ceph health
HEALTH_OK
Map the rbd/image1 image as a block device, format it with an XFS file system, and confirm that the /dev/rbd0 device is writable.
Map the rbd/image1 image as a block device.
[root@clienta ~]# rbd map --pool rbd image1
/dev/rbd0
Format the block device with an XFS file system.
[root@clienta ~]# mkfs.xfs /dev/rbd0
...output omitted...
Confirm that the /dev/rbd0 device is writable.
[root@clienta ~]# blockdev --getro /dev/rbd0
0
The value 0 indicates that the device is writable.
Create an initial snapshot called firstsnap.
Calculate the provisioned and actual disk usage of the rbd/image1 image and its associated snapshots by using the rbd disk-usage command.
Run the cephadm shell.
Create an initial snapshot called firstsnap.
[root@clienta ~]# cephadm shell
...output omitted...
[ceph: root@clienta /]# rbd snap create rbd/image1@firstsnap
Creating snap: 100% complete...done.
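The rbd/image1@firstsnap argument is the pool/image@snapshot shorthand; rbd also accepts the pool, image, and snapshot as separate options. For reference only (do not run it again), the following command should be equivalent:
[ceph: root@clienta /]# rbd snap create --pool rbd --image image1 --snap firstsnap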
Calculate the provisioned and used size of the rbd/image1 image and its associated snapshots.
[ceph: root@clienta /]# rbd disk-usage --pool rbd image1
NAME PROVISIONED USED
image1@firstsnap 128 MiB 36 MiB
image1 128 MiB 36 MiB
<TOTAL> 128 MiB 72 MiB
Open another terminal window.
Log in to clientb as the admin user and switch to the root user.
Set the CEPH_ARGS environment variable to '--id=rbd.clientb'.
[student@workstation ~]$ ssh admin@clientb
...output omitted...
[admin@clientb ~]$ sudo -i
[root@clientb ~]# export CEPH_ARGS='--id=rbd.clientb'
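The CEPH_ARGS variable supplies extra arguments to the Ceph command-line tools, so every rbd command in this shell now authenticates as the rbd.clientb user without typing --id each time. For illustration only, the equivalent explicit form of a command such as rbd showmapped would be:
[root@clientb ~]# rbd --id rbd.clientb showmapped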
On the clientb node, map the image1@firstsnap snapshot and verify that the device is read-only.
Map the rbd/image1@firstsnap snapshot as a block device.
[root@clientb ~]# rbd map --pool rbd image1@firstsnap
/dev/rbd0
[root@clientb ~]# rbd showmapped
id  pool  namespace  image   snap       device
0   rbd              image1  firstsnap  /dev/rbd0
Confirm that /dev/rbd0 is a read-only block device.
[root@clientb ~]# blockdev --getro /dev/rbd0
1
The value 1 indicates that the device is read-only; RBD always maps snapshots read-only.
On the clienta node, exit the cephadm shell.
Mount the /dev/rbd0 device in the /mnt/image directory and copy some data into it.
Mount the block device in the /mnt/image directory.
[ceph: root@clienta /]# exit
[root@clienta ~]# mount /dev/rbd0 /mnt/image
[root@clienta ~]# mount | grep rbd
/dev/rbd0 on /mnt/image type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
Copy some data into the /mnt/image directory.
[root@clienta ~]# cp /etc/ceph/ceph.conf /mnt/image/file0
[root@clienta ~]# ls /mnt/image/
file0
Check the disk space usage for the /dev/rbd0 device.
[root@clienta ~]# df /mnt/image/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/rbd0 123584 7944 115640 7% /mnt/image
On the clientb node, mount the image1@firstsnap snapshot in the /mnt/snapshot directory.
Review the disk space usage for the /dev/rbd0 device and list the directory contents.
Unmount the /mnt/snapshot directory, and then unmap the /dev/rbd0 device.
Mount the block device in the /mnt/snapshot directory.
[root@clientb ~]# mount /dev/rbd0 /mnt/snapshot/
mount: /mnt/snapshot: WARNING: device write-protected, mounted read-only.
Check the disk space usage for the /dev/rbd0 device and list the directory contents.
[root@clientb ~]# df /mnt/snapshot/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/rbd0 123584 480 123104 1% /mnt/snapshot
[root@clientb ~]# ls -l /mnt/snapshot/
total 0
Notice that the file0 file does not appear on the clientb node. The snapshot preserves the file system as it was when the snapshot was taken, before file0 was copied, so the snapshot's file system is empty.
Changes to the original block device did not alter the snapshot.
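At this point you can also confirm, from the cluster side, that the snapshot still exists on the image. This optional check is not part of the exercise; it assumes that the rbd.clientb user, which can already map the snapshot, is also allowed to list snapshots, and the exact output columns vary by Ceph release:
[root@clientb ~]# rbd snap ls --pool rbd image1
The listing should include the firstsnap entry together with the image size.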
Unmount the /mnt/snapshot directory, and then unmap the /dev/rbd0 device.
[root@clientb ~]# umount /mnt/snapshot
[root@clientb ~]# rbd unmap --pool rbd image1@firstsnap
[root@clientb ~]# rbd showmapped
On the clienta node, protect the firstsnap snapshot and create a clone called clone1 in the rbd pool.
Verify that the child image is created.
Run the cephadm shell.
Protect the firstsnap snapshot.
[root@clienta ~]# cephadm shell
...output omitted...
[ceph: root@clienta /]# rbd snap protect rbd/image1@firstsnap
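Protecting the snapshot prevents it from being deleted while child images depend on it. If you want to confirm the snapshot's state before cloning, an optional check such as the following can be run in the cephadm shell; the output should include a protected: True line:
[ceph: root@clienta /]# rbd info rbd/image1@firstsnap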
Clone the firstsnap block device snapshot to create a read/write child image called clone1 in the rbd pool.
[ceph: root@clienta /]# rbd clone rbd/image1@firstsnap rbd/clone1
List the children of the firstsnap snapshot.
[ceph: root@clienta /]# rbd children rbd/image1@firstsnap
rbd/clone1
On the clientb node, map the rbd/clone1 image as a block device, mount it, and then copy some content to the clone.
Map the rbd/clone1 image as a block device.
[root@clientb ~]# rbd map --pool rbd clone1
/dev/rbd0
Mount the block device in the /mnt/clone directory, and then list the directory contents.
[root@clientb ~]# mount /dev/rbd0 /mnt/clone
[root@clientb ~]# ls -l /mnt/clone
total 0
Add some content to the /mnt/clone directory.
[root@clientb ~]# dd if=/dev/zero of=/mnt/clone/file1 bs=1M count=10
...output omitted...
[root@clientb ~]# ls -l /mnt/clone/
total 10240
-rw-r--r--. 1 root root 10485760 Oct 15 00:04 file1
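Because clone1 is a copy-on-write child of image1@firstsnap, the 10 MiB written here is stored in the clone; the parent snapshot is not modified. If you want to compare per-image space usage, an optional check (not part of the exercise; your numbers will differ) from the cephadm shell on clienta is:
[ceph: root@clienta /]# rbd disk-usage --pool rbd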
Clean up your environment.
On the clientb node, unmount the file system and unmap the RBD image.
[root@clientb ~]# umount /mnt/clone
[root@clientb ~]# rbd unmap --pool rbd clone1
[root@clientb ~]# rbd showmapped
[root@clientb ~]# unset CEPH_ARGS
On the clienta node, exit the cephadm shell.
Unmount the file system, and then unmap the RBD image.
[ceph: root@clienta /]# exit
[root@clienta ~]# umount /mnt/image
[root@clienta ~]# rbd unmap --pool rbd image1
[root@clienta ~]# rbd showmapped
Exit and close the second terminal.
Return to workstation as the student user.
[root@clientb ~]# exit
[admin@clientb ~]$ exit
[student@workstation ~]$ exit
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.