In this review, you will configure a Red Hat Ceph Storage cluster for RBD using specified requirements.
Outcomes
You should be able to:
Deploy and configure Red Hat Ceph Storage for RBD mirroring.
Configure a client to access RBD images.
Manage RBD images, RBD mirroring, and RBD snapshots and clones.
If you did not reset your classroom virtual machines at the end of the last chapter, save any work you want to keep from earlier exercises on those machines and reset the classroom environment now.
Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. All remaining comprehensive reviews use the default Ceph cluster provided in the initial classroom environment.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start comprehensive-review4
This command ensures that the production and backup clusters are running and that both clusters have the RBD storage pools called rbd, rbdpoolmode, and rbdimagemode. It also creates the data image in the rbd pool in the production cluster.
Specifications
Deploy and configure a Red Hat Ceph Storage cluster for RBD mirroring between two clusters:
In the production cluster, create an RBD image called vm1 in the rbdpoolmode pool configured as one-way pool-mode and with a size of 128 MiB.
Create an RBD image called vm2 in the rbdimagemode pool configured as one-way image-mode and with a size of 128 MiB.
Both images should be enabled for mirroring.
The production and backup clusters should be called prod and bck, respectively.
Map the image called rbd/data using the kernel RBD client on clienta and format the device with an XFS file system.
Store a copy of the /usr/share/dict/words at the root of the file system.
Create a snapshot called beforeprod of the RBD image data, and create a clone called prod1 from the snapshot called beforeprod.
Export the image called data to the /home/admin/cr4/data.img file.
Import it as an image called data to the rbdimagemode pool.
Create a snapshot called beforeprod of the new data image in the rbdimagemode pool.
Map the image called rbd/data again using the kernel RBD client on clienta.
Copy the /etc/services file to the root of the file system.
Export changes to the rbd/data image to the /home/admin/cr4/data-diff.img file.
Configure the clienta node so that it will persistently mount the rbd/data RBD image as /mnt/data.
Using two terminals, log in to clienta for the production cluster and serverf for the backup cluster as the admin user.
Verify that each cluster is reachable and has a HEALTH_OK status.
In the first terminal, log in to clienta as the admin user and use sudo to run the cephadm shell.
Verify the health of the production cluster.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ sudo cephadm shell
...output omitted...
[ceph: root@clienta /]# ceph health
HEALTH_OK
In the second terminal, log in to serverf as admin and use sudo to run the cephadm shell.
Verify the health of the backup cluster.
Exit from the cephadm shell.
[student@workstation ~]$ ssh admin@serverf
...output omitted...
[admin@serverf ~]$ sudo cephadm shell
...output omitted...
[ceph: root@serverf /]# ceph health
HEALTH_OK
[ceph: root@serverf /]# exit
[admin@serverf ~]$
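If either cluster does not report HEALTH_OK, the ceph status command (also available as ceph -s) shows which services or placement groups are affected. This check is optional and its output depends on the current state of your cluster; run it from within the cephadm shell on either node.
[ceph: root@clienta /]# ceph status
...output omitted...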
In the production cluster, create the rbdpoolmode/vm1 RBD image, enable one-way pool-mode mirroring on the pool, and view the image information.
Create an RBD image called vm1 in the rbdpoolmode pool in the production cluster.
Specify a size of 128 MiB, and enable the exclusive-lock and journaling RBD image features.
[ceph: root@clienta /]# rbd create vm1 \
--size 128 \
--pool rbdpoolmode \
--image-feature=exclusive-lock,journaling
Enable pool-mode mirroring on the rbdpoolmode pool.
[ceph: root@clienta /]# rbd mirror pool enable rbdpoolmode pool
View the vm1 image information.
Exit from the cephadm shell.
[ceph: root@clienta /]# rbd info --pool rbdpoolmode vm1
rbd image 'vm1':
        size 128 MiB in 32 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: ad7c2dd2d3be
        block_name_prefix: rbd_data.ad7c2dd2d3be
        format: 2
        features: exclusive-lock, journaling
        op_features:
        flags:
        create_timestamp: Tue Oct 26 23:46:28 2021
        access_timestamp: Tue Oct 26 23:46:28 2021
        modify_timestamp: Tue Oct 26 23:46:28 2021
        journal: ad7c2dd2d3be
        mirroring state: enabled
        mirroring mode: journal
        mirroring global id: 6ea4b768-a53d-4195-a1f5-37733eb9af76
        mirroring primary: true
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$
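Before exiting the cephadm shell in the previous step, you could also confirm the pool-level mirroring configuration with rbd mirror pool info. This extra check is not required by the lab, and the peer list stays empty until the bootstrap steps that follow.
[ceph: root@clienta /]# rbd mirror pool info rbdpoolmode
...output omitted...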
In the production cluster, run the cephadm shell with a bind mount of /home/admin/cr4/.
Bootstrap the storage cluster peer, create the Ceph user accounts, and save the token in the /home/admin/cr4/pool_token_prod file in the container.
Name the production cluster prod.
Copy the bootstrap token file to the backup storage cluster.
In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.
[admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
...output omitted...
[ceph: root@clienta /]#
Bootstrap the storage cluster peer, create the Ceph user accounts, and save the output in the /mnt/pool_token_prod file.
Name the production cluster prod.
[ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
--site-name prod rbdpoolmode > /mnt/pool_token_prod
Exit the cephadm shell.
Copy the bootstrap token file to the backup storage cluster in the /home/admin/cr4/ directory.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo rsync -avP /home/admin/cr4/ \
serverf:/home/admin/cr4/
...output omitted...
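Optionally, use the second terminal to confirm that the token file arrived on serverf before importing it. This check is not part of the graded steps.
[admin@serverf ~]$ ls -l /home/admin/cr4/
...output omitted...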
In the backup cluster, run the cephadm shell with a bind mount of /home/admin/cr4/.
Deploy an rbd-mirror daemon on the serverf node.
Import the bootstrap token located in the /home/admin/cr4/ directory.
Name the backup cluster bck.
Verify that the RBD image is present.
In the backup cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.
[admin@serverf ~]$ sudo cephadm shell --mount /home/admin/cr4/
...output omitted...
[ceph: root@serverf /]#
Deploy an rbd-mirror daemon by using the --placement option to select the serverf.lab.example.com node.
Verify the placement.
[ceph: root@serverf /]# ceph orch apply rbd-mirror \
--placement=serverf.lab.example.com
Scheduled rbd-mirror update...
[ceph: root@serverf /]# ceph orch ps --format=yaml --service-name=rbd-mirror
daemon_type: rbd-mirror
daemon_id: serverf.hhunqx
hostname: serverf.lab.example.com
...output omitted...
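If the daemon is not listed immediately, the orchestrator might still be deploying it; repeat the ceph orch ps command after a few seconds. You can also review the service summary, which is an optional check.
[ceph: root@serverf /]# ceph orch ls rbd-mirror
...output omitted...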
Import the bootstrap token located in /mnt/pool_token_prod.
Name the backup cluster bck.
[ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
--site-name bck --direction rx-only rbdpoolmode /mnt/pool_token_prod
Ignore the known error containing the following text: auth: unable to find a keyring on …
Verify that the RBD image is present. Wait until the RBD image is displayed.
[ceph: root@serverf /]# rbd --pool rbdpoolmode ls
vm1
In the production cluster, create the rbdimagemode/vm2 RBD image and enable one-way image-mode mirroring on the pool.
Also, enable mirroring for the vm2 RBD image in the rbdimagemode pool.
In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.
[admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
...output omitted...
[ceph: root@clienta /]#
Create an RBD image called vm2 in the rbdimagemode pool in the production cluster.
Specify a size of 128 MiB, and enable the exclusive-lock and journaling RBD image features.
[ceph: root@clienta /]# rbd create vm2 \
--size 128 \
--pool rbdimagemode \
--image-feature=exclusive-lock,journaling
Enable image-mode mirroring on the rbdimagemode pool.
[ceph: root@clienta /]# rbd mirror pool enable rbdimagemode image
Enable mirroring for the vm2 RBD image in the rbdimagemode pool.
[ceph: root@clienta /]# rbd mirror image enable rbdimagemode/vm2
Mirroring enabled
In the production cluster, bootstrap the storage cluster peer, create the Ceph user accounts, and save the token in the /home/admin/cr4/image_token_prod file in the container.
Copy the bootstrap token file to the backup storage cluster.
Bootstrap the storage cluster peer, create the Ceph user accounts, and save the output in the /mnt/image_token_prod file.
[ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
rbdimagemode > /mnt/image_token_prod
Exit from the cephadm shell.
Copy the bootstrap token file to the backup storage cluster in the /home/admin/cr4/ directory.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo rsync -avP /home/admin/cr4/ \
serverf:/home/admin/cr4/
...output omitted...
In the backup cluster, import the bootstrap token. Verify that the RBD image is present.
Import the bootstrap token located in /mnt/image_token_prod.
Name the backup cluster bck.
[ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
--direction rx-only rbdimagemode /mnt/image_token_prod
Ignore the known error containing the following text: auth: unable to find a keyring on …
Verify that the RBD image is present. Wait until the RBD image appears.
[ceph: root@serverf /]# rbd --pool rbdimagemode ls
vm2
Return to workstation as the student user and exit the second terminal.
[ceph: root@serverf /]# exit
exit
[admin@serverf ~]$ exit
[student@workstation ~]$ exit
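If you want an additional confirmation that mirroring is healthy, you can run rbd mirror pool status from the cephadm shell on serverf before closing that terminal. This optional check's output depends on how far replication has progressed.
[ceph: root@serverf /]# rbd mirror pool status rbdpoolmode
...output omitted...
[ceph: root@serverf /]# rbd mirror pool status rbdimagemode
...output omitted...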
In the production cluster, map the image called rbd/data using the kernel RBD client on clienta.
Format the device with an XFS file system.
Temporarily mount the file system and store a copy of the /usr/share/dict/words file at the root of the file system.
Unmount and unmap the device when done.
Map the data image in the rbd pool using the kernel RBD client.
[admin@clienta ~]$ sudo rbd map --pool rbd data
/dev/rbd0
Format the /dev/rbd0 device with an XFS file system and mount the file system on the /mnt/data directory.
[admin@clienta ~]$ sudo mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=4096 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=32768, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1872, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
[admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
Copy the /usr/share/dict/words file to the root of the file system, /mnt/data.
List the content to verify the copy.
[admin@clienta ~]$ sudo cp /usr/share/dict/words /mnt/data/
[admin@clienta ~]$ ls /mnt/data/
words
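If you want to confirm which RBD image backs the mounted file system, the rbd showmapped and df commands are useful. This check is optional and does not modify anything.
[admin@clienta ~]$ rbd showmapped
...output omitted...
[admin@clienta ~]$ df -h /mnt/data
...output omitted...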
Unmount and unmap the /dev/rbd0 device.
[admin@clienta ~]$ sudo umount /dev/rbd0
[admin@clienta ~]$ sudo rbd unmap --pool rbd data
In the production cluster, create a snapshot called beforeprod of the RBD image data.
Create a clone called prod1 from the snapshot called beforeprod.
In the production cluster, use sudo to run the cephadm shell.
Create a snapshot called beforeprod of the RBD image data in the rbd pool.
[admin@clienta ~]$ sudo cephadm shell
...output omitted...
[ceph: root@clienta /]# rbd snap create rbd/data@beforeprod
Creating snap: 100% complete...done.
Verify the snapshot by listing the snapshots of the data RBD image in the rbd pool.
[ceph: root@clienta /]# rbd snap list --pool rbd data
SNAPID  NAME        SIZE     PROTECTED  TIMESTAMP
     4  beforeprod  128 MiB             Thu Oct 28 00:03:08 2021
Protect the beforeprod snapshot and create the clone.
Exit from the cephadm shell.
[ceph: root@clienta /]# rbd snap protect rbd/data@beforeprod
[ceph: root@clienta /]# rbd clone rbd/data@beforeprod rbd/prod1
[ceph: root@clienta /]# exit
exit
Verify that the clone also contains the words file by mapping and mounting the clone image.
Unmount the file system and unmap the device after verification.
[admin@clienta ~]$ sudo rbd map --pool rbd prod1
/dev/rbd0
[admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
[admin@clienta ~]$ ls /mnt/data
words
[admin@clienta ~]$ sudo umount /mnt/data
[admin@clienta ~]$ sudo rbd unmap --pool rbd prod1
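Optionally, you can list the clones that depend on the beforeprod snapshot with the rbd children command. Because clienta has the admin keyring installed, you can run it directly on the host; you should see the prod1 clone listed.
[admin@clienta ~]$ sudo rbd children rbd/data@beforeprod
rbd/prod1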
In the production cluster, export the image called data to the /home/admin/cr4/data.img file.
Import it as an image called data to the rbdimagemode pool.
Create a snapshot called beforeprod of the new data image in the rbdimagemode pool.
In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.
Export the image called data to the /mnt/data.img file.
[admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
...output omitted...
[ceph: root@clienta /]# rbd export --pool rbd data /mnt/data.img
Exporting image: 100% complete...done.
Import the /mnt/data.img file as an image called data to the pool called rbdimagemode.
Verify the import by listing the images in the rbdimagemode pool.
[ceph: root@clienta /]# rbd import /mnt/data.img rbdimagemode/data
Importing image: 100% complete...done.
[ceph: root@clienta /]# rbd --pool rbdimagemode ls
data
vm2
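As an optional check, you can view the details of the imported image. The identifiers in the output differ from those of the rbd/data image, but the size should match.
[ceph: root@clienta /]# rbd info rbdimagemode/data
...output omitted...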
Create a snapshot called beforeprod of the image called data in the pool called rbdimagemode.
Exit from the cephadm shell.
[ceph: root@clienta /]# rbd snap create rbdimagemode/data@beforeprod
Creating snap: 100% complete...done.
[ceph: root@clienta /]# exit
exit
On the clienta host, use the kernel RBD client to remap and remount the RBD image called data in the pool called rbd.
Copy the /etc/services file to the root of the file system.
Unmount the file system and unmap the device when done.
Map the data image in the rbd pool using the kernel RBD client.
Mount the file system on /mnt/data.
[admin@clienta ~]$ sudo rbd map --pool rbd data
/dev/rbd0
[admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
Copy the /etc/services file to the root of the file system, /mnt/data.
List the contents of /mnt/data for verification.
[admin@clienta ~]$ sudo cp /etc/services /mnt/data/
[admin@clienta ~]$ ls /mnt/data/
services  words
Unmount the file system and unmap the data image in the rbd pool.
[admin@clienta ~]$ sudo umount /mnt/data
[admin@clienta ~]$ sudo rbd unmap --pool rbd data
In the production cluster, export changes to the rbd/data image, after the creation of the beforeprod snapshot, to a file called /home/admin/cr4/data-diff.img.
Import the changes from the /mnt/data-diff.img file to the image called data in the rbdimagemode pool.
In the production cluster, use sudo to run the cephadm shell with a bind mount of the /home/admin/cr4/ directory.
Export changes made to the data image in the rbd pool after the creation of the beforeprod snapshot to a file called /mnt/data-diff.img.
[admin@clienta ~]$ sudo cephadm shell --mount /home/admin/cr4/
...output omitted...
[ceph: root@clienta /]# rbd export-diff \
--from-snap beforeprod rbd/data \
/mnt/data-diff.img
Exporting image: 100% complete...done.
Import changes from the /mnt/data-diff.img file to the image called data in the pool called rbdimagemode.
Exit from the cephadm shell.
[ceph: root@clienta /]# rbd import-diff \
/mnt/data-diff.img \
rbdimagemode/data
Importing image diff: 100% complete...done.
[ceph: root@clienta /]# exit
exit
Verify that the image called data in the pool called rbdimagemode also contains the services file by mapping and mounting the image.
When done, unmount the file system and unmap the image.
[admin@clienta ~]$ sudo rbd map rbdimagemode/data
/dev/rbd0
[admin@clienta ~]$ sudo mount /dev/rbd0 /mnt/data
[admin@clienta ~]$ ls /mnt/data
services  words
[admin@clienta ~]$ sudo umount /mnt/data
[admin@clienta ~]$ sudo rbd unmap --pool rbdimagemode data
Configure the clienta host so that it will persistently mount the rbd/data RBD image as /mnt/data.
Authenticate as the admin Ceph user by using existing keys found in the /etc/ceph/ceph.client.admin.keyring file.
Create an entry for rbd/data in the /etc/ceph/rbdmap RBD map file.
The resulting file should have the following contents:
[admin@clienta ~]$ cat /etc/ceph/rbdmap
# RbdDevice             Parameters
#poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
rbd/data                id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
Create an entry for /dev/rbd/rbd/data in the /etc/fstab file.
The resulting file should have the following contents:
[admin@clienta ~]$ cat /etc/fstab
UUID=d47ead13-ec24-428e-9175-46aefa764b26 / xfs defaults 0 0
UUID=7B77-95E7 /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
/dev/rbd/rbd/data /mnt/data xfs noauto 0 0
Use the rbdmap command to verify your RBD map configuration.
[admin@clienta ~]$ sudo rbdmap map
[admin@clienta ~]$ rbd showmapped
id  pool  namespace  image  snap  device
0   rbd              data   -     /dev/rbd0
[admin@clienta ~]$ sudo rbdmap unmap
[admin@clienta ~]$ rbd showmapped
After you have verified that the RBD mapped devices work, enable the rbdmap service.
Reboot the clienta host to verify that the RBD device mounts persistently.
[admin@clienta ~]$ sudo systemctl enable rbdmap
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /usr/lib/systemd/system/rbdmap.service.
[admin@clienta ~]$ sudo reboot
Connection to clienta closed by remote host.
Connection to clienta closed.
When clienta finishes rebooting, log in to clienta as the admin user, and verify that it has mounted the RBD device.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ df /mnt/data
Filesystem     1K-blocks   Used  Available  Use%  Mounted on
/dev/rbd0         123584  13460     110124   11%  /mnt/data
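You can also optionally confirm that the rbdmap service mapped the image at boot and review the service state; the exact output varies.
[admin@clienta ~]$ rbd showmapped
...output omitted...
[admin@clienta ~]$ systemctl status rbdmap
...output omitted...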
Return to workstation as the student user.
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the lab.