In this exercise, you will create and configure RADOS block devices.
Outcomes
You should be able to create and manage RADOS block device images and use them as regular block devices.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command confirms that the hosts required for this exercise are accessible.
[student@workstation ~]$ lab start block-devices
Procedure 6.1. Instructions
Verify that the Red Hat Ceph cluster is in a healthy state.
Log in to clienta as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
Verify that the cluster status is HEALTH_OK.
[ceph: root@clienta /]# ceph health
HEALTH_OK
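If the cluster reports a warning instead of HEALTH_OK, the ceph status command shows more detail about the affected services. This is an optional check in the same cephadm shell, not a required step of the exercise:
[ceph: root@clienta /]# ceph status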
Create a replicated pool called test_pool with 32 placement groups.
Set the application type for the pool to rbd.
Create the pool test_pool.
[ceph: root@clienta /]# ceph osd pool create test_pool 32 32
pool 'test_pool' created
Set rbd as the application type for the pool.
[ceph: root@clienta /]# rbd pool init test_pool
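Besides preparing the pool for RBD use, the rbd pool init command tags the pool with the rbd application, which is what the previous instruction asks for. An equivalent manual approach, not used in this exercise, is to enable and then confirm the application tag explicitly:
[ceph: root@clienta /]# ceph osd pool application enable test_pool rbd
[ceph: root@clienta /]# ceph osd pool application get test_pool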
List the configured pools and view the usage and availability for test_pool.
The ID of test_pool might be different in your lab environment.
[ceph: root@clienta /]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 90 GiB 90 GiB 136 MiB 136 MiB 0.15
TOTAL 90 GiB 90 GiB 136 MiB 136 MiB 0.15
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
...output omitted...
test_pool 6 32 9 B 1 8 KiB 0 28 GiB
Create a dedicated Ceph user called client.test_pool.clientb.
Configure the clienta node with the user's key-ring file to access the RBD pool containing the RBD images.
Create the client.test_pool.clientb user and display the new file.
[ceph: root@clienta /]# ceph auth get-or-create client.test_pool.clientb \
mon 'profile rbd' osd 'profile rbd' \
-o /etc/ceph/ceph.client.test_pool.clientb.keyring
[ceph: root@clienta /]# cat /etc/ceph/ceph.client.test_pool.clientb.keyring
[client.test_pool.clientb]
key = AQAxBE1h79iUNhAAnBWswmX1Wk1Dhlq4a3U61Q==
Verify that the client.test_pool.clientb user now exists.
[ceph: root@clienta /]# ceph auth get client.test_pool.clientb
[client.test_pool.clientb]
key = AQAxBE1h79iUNhAAnBWswmX1Wk1Dhlq4a3U61Q==
caps mon = "profile rbd"
caps osd = "profile rbd"
exported keyring for client.test_pool.clientb
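The rbd profiles grant only the monitor and OSD access that an RBD client needs. A more restrictive variant, not used in this exercise, scopes the OSD capability to a single pool; the client.example user name below is purely illustrative:
[ceph: root@clienta /]# ceph auth get-or-create client.example \
mon 'profile rbd' osd 'profile rbd pool=test_pool'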
Open a second terminal window and log in to the clientb node as the admin user.
Copy the key-ring file for the new client.test_pool.clientb user.
Use the client.test_pool.clientb user name when connecting to the cluster.
Log in to clientb as the admin user and switch to the root user.
[student@workstation ~]$ ssh admin@clientb
[admin@clientb ~]$ sudo -i
[root@clientb ~]#
Install the ceph-common package on the clientb node.
[root@clientb ~]# yum install -y ceph-common
...output omitted...
Complete!
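The ceph-common package provides the ceph, rados, and rbd command-line tools that you use on clientb for the rest of this exercise. An optional way to confirm that the tools are installed:
[root@clientb ~]# ceph --version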
Go to the first terminal and copy the Ceph configuration and the key-ring files from the /etc/ceph/ directory on the clienta node to the /etc/ceph/ directory on the clientb node.
[ceph: root@clienta /]# scp \
/etc/ceph/{ceph.conf,ceph.client.test_pool.clientb.keyring} \
root@clientb:/etc/ceph
root@clientb's password: redhat
...output omitted...
Go to the second terminal window.
Temporarily set the default user ID used for connections to the cluster to test_pool.clientb.
[root@clientb ~]# export CEPH_ARGS='--id=test_pool.clientb'
Verify that you can connect to the cluster.
[root@clientb ~]# rbd ls test_pool
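Setting CEPH_ARGS only saves typing; an equivalent approach, not required here, is to pass the credentials on each command with the standard --id and --keyring options, pointing at the key-ring file copied earlier:
[root@clientb ~]# rbd ls test_pool --id test_pool.clientb \
--keyring /etc/ceph/ceph.client.test_pool.clientb.keyring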
Create a new RADOS block device image and map it to the clientb machine.
Create an RBD image called test in the test_pool pool. Specify a size of 128 MiB.
[root@clientb ~]# rbd create test_pool/test --size=128M
[root@clientb ~]# rbd ls test_pool
test
Verify the parameters of the RBD image.
[root@clientb ~]# rbd info test_pool/test
rbd image 'test':
size 128 MiB in 32 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 867cba5c2d68
block_name_prefix: rbd_data.867cba5c2d68
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Thu Sep 23 18:54:35 2021
access_timestamp: Thu Sep 23 18:54:35 2021
modify_timestamp: Thu Sep 23 18:54:35 2021
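In this output, order 22 means that each backing RADOS object is 2^22 bytes, that is 4 MiB, so the 128 MiB image is divided into 128 / 4 = 32 objects, which matches the size line above.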
Map the RBD image on the clientb node by using the kernel RBD client.
[root@clientb ~]# rbd map test_pool/test
/dev/rbd0
[root@clientb ~]# rbd showmapped
id pool namespace image snap device
0 test_pool test - /dev/rbd0
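As an optional sanity check, not required by the exercise, you can confirm that the kernel created the block device node:
[root@clientb ~]# lsblk /dev/rbd0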
Verify that you can use the RBD image mapped on the clientb node like a regular disk block device.
Format the device with an XFS file system.
[root@clientb ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=4096 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=32768, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=1872, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
Create a mount point for the file system.
[root@clientb ~]# mkdir /mnt/rbd
Mount the file system created on the /dev/rbd0 device.
[root@clientb ~]# mount /dev/rbd0 /mnt/rbd
Change the ownership of the mount point.
[root@clientb ~]# chown admin:admin /mnt/rbd
Review the file-system usage.
[root@clientb ~]# df /mnt/rbd
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/rbd0 123584 7940 115644 7% /mnt/rbd
Add some content to the file system.
[root@clientb ~]# dd if=/dev/zero of=/mnt/rbd/test1 bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00838799 s, 1.3 GB/s
[root@clientb ~]# ls /mnt/rbd
test1
Review the file-system usage again.
[root@clientb ~]# df /mnt/rbd
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/rbd0 123584 18180 105404 15% /mnt/rbd
Review the content of the cluster.
[root@clientb ~]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 90 GiB 90 GiB 158 MiB 158 MiB 0.17
TOTAL 90 GiB 90 GiB 158 MiB 158 MiB 0.17
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
...output omitted...
test_pool 6 32 2.5 MiB 14 7.5 MiB 0 28 GiB
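Assuming the pool uses the default three-way replication, USED is roughly three times STORED (3 x 2.5 MiB ≈ 7.5 MiB), so the raw cluster usage grows faster than the data that you write into the file system.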
Unmount the file system and unmap the RBD image on the clientb node.
[root@clientb ~]# umount /mnt/rbd
[root@clientb ~]# rbd unmap /dev/rbd0
[root@clientb ~]# rbd showmapped
Configure the client system so that it persistently maps the test_pool/test RBD image and mounts it on /mnt/rbd.
Create a single-line entry for test_pool/test in the /etc/ceph/rbdmap RBD map file.
The rbdmap service should authenticate as the test_pool.clientb user by using the appropriate key-ring file.
The resulting file should have the following contents:
[root@clientb ~]# cat /etc/ceph/rbdmap
# RbdDevice Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
test_pool/test id=test_pool.clientb,keyring=/etc/ceph/ceph.client.test_pool.clientb.keyring
Create an entry for /dev/rbd/test_pool/test in the /etc/fstab file.
The resulting file should have the following contents:
[root@clientb ~]# cat /etc/fstab
UUID=fe1e8b67-e41b-44b8-bcfe-e0ec966784ac / xfs defaults 0 0
UUID=F537-0F4F /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
/dev/rbd/test_pool/test /mnt/rbd xfs noauto 0 0
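The noauto option keeps the system from trying to mount /mnt/rbd at boot before the RBD device exists. Instead, the rbdmap service maps the image when it starts, and the matching /etc/fstab entry is then mounted, which is what the reboot test later in this exercise verifies.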
Verify your RBD map configuration.
Use the rbdmap command to map and unmap configured RBD devices.
[root@clientb ~]# rbdmap map
[root@clientb ~]# rbd showmapped
id pool namespace image snap device
0 test_pool test - /dev/rbd0
[root@clientb ~]# rbdmap unmap
[root@clientb ~]# rbd showmapped
After you have verified that the RBD mapped devices work, enable the rbdmap service.
Reboot the clientb node to verify that the RBD device mounts persistently.
[root@clientb ~]# systemctl enable rbdmap
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /usr/lib/systemd/system/rbdmap.service.
[root@clientb ~]# reboot
Connection to clientb closed by remote host.
Connection to clientb closed.
When the clientb node finishes rebooting, log in and verify that it has mounted the RBD device.
[student@workstation ~]$ ssh admin@clientb
[admin@clientb ~]$ df /mnt/rbd
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/rbd0 123584 18180 105404 15% /mnt/rbd
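If the file system is not mounted after the reboot, a reasonable first check, although it is not part of the exercise steps, is the state of the rbdmap service:
[admin@clientb ~]$ systemctl status rbdmap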
Unmount your file system, unmap and delete the test_pool/test RBD image, and delete the temporary objects to clean up your environment.
Unmount the /mnt/rbd file system and unmap the RBD image.
[admin@clientb ~]$ sudo -i
[root@clientb ~]# rbdmap unmap
[root@clientb ~]# df | grep rbd
[root@clientb ~]# rbd showmapped
Remove the RBD entry from the /etc/fstab file.
The resulting file should contain the following:
[root@clientb ~]# cat /etc/fstab
UUID=fe1e8b67-e41b-44b8-bcfe-e0ec966784ac / xfs defaults 0 0
UUID=F537-0F4F /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
Remove the RBD map entry for test_pool/test from the /etc/ceph/rbdmap file.
The resulting file should contain the following:
[root@clientb ~]# cat /etc/ceph/rbdmap
# RbdDevice Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
Delete the test RBD image.
[root@clientb ~]# rbd rm test_pool/test --id test_pool.clientb
Removing image: 100% complete...done.
Verify that the test_pool RBD pool no longer contains extra data.
The test_pool pool should contain only the three internal RBD bookkeeping objects shown in the listing.
[root@clientb ~]# rados -p test_pool ls --id test_pool.clientb
rbd_directory
rbd_info
rbd_trash
Exit and close the second terminal.
In the first terminal, return to workstation as the student user.
[root@clientb ~]# exit
[admin@clientb ~]$ exit
[student@workstation ~]$ exit
[ceph: root@clienta /]# exit
exit
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.