In this lab you will configure Red Hat Ceph Storage to provide block storage to clients using RADOS block devices (RBDs). You will import and export RBD images to and from the Ceph cluster.
Outcomes
You should be able to:
Create and prepare an RBD pool.
Create, manage, and use RBD images.
Export and import RBD images.
As the student user on the workstation machine, use the lab command to prepare your system for this lab.
[student@workstation ~]$ lab start block-review
This command verifies the status of the cluster and creates the rbd pool if it does not already exist.
Procedure 6.4. Instructions
Perform the following steps on your clienta admin node, which is a client of the primary 3-node Ceph storage cluster.
Log in to clienta as the admin user.
Create a pool called rbd260, enable the rbd client application for the Ceph block device, and make it usable by the RBD feature.
Log in to clienta as the admin user and use sudo to run the cephadm shell.
Verify that the primary cluster is in a healthy state.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ sudo cephadm shell
...output omitted...
[ceph: root@clienta /]# ceph health
HEALTH_OK
Create a pool called rbd260 with 32 placement groups.
Enable the rbd client application for the Ceph Block Device and make it usable by the RBD feature.
[ceph: root@clienta /]# ceph osd pool create rbd260 32
pool 'rbd260' created
[ceph: root@clienta /]# ceph osd pool application enable rbd260 rbd
enabled application 'rbd' on pool 'rbd260'
[ceph: root@clienta /]# rbd pool init -p rbd260
List the rbd260 pool details to verify your work.
The pool ID might be different in your lab environment.
[ceph: root@clienta /]# ceph osd pool ls detail | grep rbd260
pool 7 'rbd260' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 203 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
Create a 128 MiB RADOS block device image called prod260 in the rbd260 pool.
Verify your work.
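The lab text does not reproduce the commands for this step. A minimal sketch, assuming you are still in the cephadm shell and that a size given to rbd create without a suffix is interpreted as mebibytes:

[ceph: root@clienta /]# rbd create --size 128 rbd260/prod260
[ceph: root@clienta /]# rbd info rbd260/prod260
...output omitted...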
Map the prod260 RBD image in the rbd260 pool to a local block device file by using the kernel RBD client.
Format the device with an XFS file system.
Mount the file system on the /mnt/prod260 directory and copy the /etc/resolv.conf file to the root of this new file system.
When done, unmount and unmap the device.
Exit the cephadm shell, then switch to the root user.
Install the ceph-common package on the clienta node.
Map the prod260 image from the rbd260 pool using the kernel RBD client.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo -i
[root@clienta ~]# yum install -y ceph-common
...output omitted...
Complete!
[root@clienta ~]# rbd map --pool rbd260 prod260
/dev/rbd0
Format the /dev/rbd0 device with an XFS file system and mount the file system on the /mnt/prod260 directory.
Change the user and group ownership of the root directory of the new file system to admin.
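If the /mnt/prod260 mount point does not already exist on clienta (the lab setup might create it for you), create it before mounting; this step is an assumption and is not shown in the original transcript:

[root@clienta ~]# mkdir -p /mnt/prod260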
[root@clienta ~]# mkfs.xfs /dev/rbd0
...output omitted...
[root@clienta ~]# mount /dev/rbd0 /mnt/prod260
[root@clienta ~]# chown admin:admin /mnt/prod260
Copy the /etc/resolv.conf file to the root of the /mnt/prod260 file system, and then list the contents to verify the copy.
[root@clienta ~]# cp /etc/resolv.conf /mnt/prod260
[root@clienta ~]# ls /mnt/prod260/
resolv.conf
Unmount and unmap the /dev/rbd0 device.
[root@clienta ~]# umount /dev/rbd0
[root@clienta ~]# rbd unmap --pool rbd260 prod260
Create a snapshot of the prod260 RBD image in the rbd260 pool and name it beforeprod.
Run the cephadm shell.
Create the beforeprod snapshot of the prod260 image in the rbd260 pool.
[root@clienta ~]# cephadm shell
...output omitted...
[ceph: root@clienta /]# rbd snap create rbd260/prod260@beforeprod
Creating snap: 100% complete...done.
List the snapshots of the prod260 RBD image in the rbd260 pool to verify your work.
[ceph: root@clienta /]# rbd snap list --pool rbd260 prod260
SNAPID  NAME        SIZE     PROTECTED  TIMESTAMP
     4  beforeprod  128 MiB             Mon Oct  4 17:11:57 2021
The snapshot ID and the time stamp are different in your lab environment.
Export the prod260 RBD image from the rbd260 pool to the /root/prod260.xfs file.
Import that image file into the rbd pool on your primary 3-node Ceph cluster, and name the imported image img260 in that pool.
Export the prod260 RBD image from the rbd260 pool to a file called /root/prod260.xfs.
[ceph: root@clienta /]# rbd export rbd260/prod260 /root/prod260.xfs
Exporting image: 100% complete...done.
Retrieve the size of the /root/prod260.xfs file to verify the export.
[ceph: root@clienta /]# ls -lh /root/prod260.xfs
-rw-r--r--. 1 root root 128M Oct  4 17:39 /root/prod260.xfs
Import the /root/prod260.xfs file as the img260 RBD image in the rbd pool.
[ceph: root@clienta /]# rbd import /root/prod260.xfs rbd/img260
Importing image: 100% complete...done.
List the images in the rbd pool to verify the import.
Exit from the cephadm shell.
[ceph: root@clienta /]# rbd --pool rbd ls
img260
[ceph: root@clienta /]# exit
exit
[root@clienta ~]#
The rbd ls command might display images from previous exercises.
Configure the client system so that it persistently mounts the rbd260/prod260 RBD image as /mnt/prod260.
Authenticate as the admin Ceph user using existing keys found in the /etc/ceph/ceph.client.admin.keyring file.
Create an entry for the rbd260/prod260 image in the /etc/ceph/rbdmap RBD map file. The resulting file should have the following contents:
[root@clienta ~]# cat /etc/ceph/rbdmap
# RbdDevice             Parameters
#poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
rbd260/prod260          id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
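If you prefer the command line to a text editor, one possible way to append this entry (assuming the file still contains its default header shown above):

[root@clienta ~]# echo "rbd260/prod260 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap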
Create an entry for the /dev/rbd/rbd260/prod260 image in the /etc/fstab file. The resulting file should have the following contents:
[root@clienta ~]# cat /etc/fstab
UUID=d47ead13-ec24-428e-9175-46aefa764b26  /  xfs  defaults  0 0
UUID=7B77-95E7  /boot/efi  vfat  defaults,uid=0,gid=0,umask=077,shortname=winnt  0 2
/dev/rbd/rbd260/prod260  /mnt/prod260  xfs  noauto  0 0
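Similarly, one possible way to append the fstab entry from the command line (the UUID lines are specific to your system and must not be changed):

[root@clienta ~]# echo "/dev/rbd/rbd260/prod260 /mnt/prod260 xfs noauto 0 0" >> /etc/fstab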
Use the rbdmap command to validate your RBD map configuration.
[root@clienta ~]# rbdmap map
[root@clienta ~]# rbd showmapped
id  pool    namespace  image    snap  device
0   rbd260             prod260  -     /dev/rbd0
[root@clienta ~]# rbdmap unmap
[root@clienta ~]# rbd showmapped
After you have confirmed that the RBD mapped devices work, enable the rbdmap service.
Reboot the clienta node to confirm that the RBD device mounts persistently.
[root@clienta ~]# systemctl enable rbdmap
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /usr/lib/systemd/system/rbdmap.service.
[root@clienta ~]# reboot
Connection to clienta closed by remote host.
Connection to clienta closed.
After rebooting, log in to the clienta node as the admin user.
Confirm that the system has mounted the RBD device.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ df -h /mnt/prod260
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       121M  7.8M  113M   7% /mnt/prod260
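As an optional extra check, not part of the original lab steps, you can also list the kernel RBD mappings to confirm that the rbdmap service remapped the image at boot; sudo is assumed to be required for the admin user:

[admin@clienta ~]$ sudo rbd showmapped
...output omitted...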
Return to workstation as the student user.
This concludes the lab.