In this exercise, you will configure one-way, pool-mode RBD mirroring between two Ceph clusters.
Outcomes
You should be able to:
Configure one-way, pool-mode RBD mirroring between two clusters.
Verify the status of the mirroring process between two clusters.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start mirror-mirrors
This command confirms that the hosts required for this exercise are accessible.
Your Ceph clusters and configuration will not be modified by this lab start command.
Procedure 7.1. Instructions
Open two terminals and log in to clienta and serverf as the admin user.
Verify that both clusters are reachable and have a HEALTH_OK status.
Open a terminal window, log in to clienta as the admin user, and switch to the root user.
Run a cephadm shell.
Verify the health of your production cluster.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ sudo -i
[root@clienta ~]# cephadm shell
[ceph: root@clienta /]# ceph status
...output omitted...
  cluster:
    id:     ff97a876-1fd2-11ec-8258-52540000fa0c
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum serverc.lab.example.com,servere,serverd,clienta (age 15m)
    mgr: serverc.lab.example.com.btgxor (active, since 15m), standbys: servere.fmyxwv, clienta.soxncl, serverd.ufqxxk
    osd: 9 osds: 9 up (since 15m), 9 in (since 47h)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    pools:   5 pools, 105 pgs
    objects: 190 objects, 5.3 KiB
    usage:   147 MiB used, 90 GiB / 90 GiB avail
    pgs:     105 active+clean
Ensure that the monitor daemons displayed in the services section match those of your 3-node production cluster plus the client.
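If the cluster does not report HEALTH_OK, do not continue. As an optional check that is not part of the exercise steps, you can display the reasons for any warning with the standard health commands:
[ceph: root@clienta /]# ceph health
[ceph: root@clienta /]# ceph health detail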
Open another terminal window, log in to serverf as the admin user, and switch to the root user.
Verify the health of your backup cluster.
[student@workstation ~]$ ssh admin@serverf
...output omitted...
[admin@serverf ~]$ sudo -i
[root@serverf ~]# cephadm shell
[ceph: root@serverf /]# ceph status
...output omitted...
  cluster:
    id:     3c67d550-1fd3-11ec-a0d5-52540000fa0f
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum serverf.lab.example.com (age 18m)
    mgr: serverf.lab.example.com.qfmyuk (active, since 18m)
    osd: 5 osds: 5 up (since 18m), 5 in (since 47h)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    pools:   5 pools, 105 pgs
    objects: 189 objects, 4.9 KiB
    usage:   82 MiB used, 50 GiB / 50 GiB avail
    pgs:     105 active+clean
Ensure that the monitor daemon displayed in the services section matches that of your single-node backup cluster.
Create a pool called rbd in the production cluster with 32 placement groups.
In the backup cluster, configure a pool to mirror the data from the rbd pool in the production cluster to the backup cluster.
Pool-mode mirroring always mirrors data between two pools that have the same name in both clusters.
In the production cluster, create a pool called rbd with 32 placement groups.
Enable the rbd application on the pool and initialize the pool for use by RBD.
[ceph: root@clienta /]# ceph osd pool create rbd 32 32
pool 'rbd' created
[ceph: root@clienta /]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[ceph: root@clienta /]# rbd pool init -p rbd
In the backup cluster, create a pool called rbd with 32 placement groups.
Enable the rbd application on the pool and initialize the pool for use by RBD.
[ceph: root@serverf /]# ceph osd pool create rbd 32 32
pool 'rbd' created
[ceph: root@serverf /]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[ceph: root@serverf /]# rbd pool init -p rbd
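Optionally, you can confirm that the rbd application is now associated with the pool. This extra verification is not required by the exercise and can be run on either cluster:
[ceph: root@clienta /]# ceph osd pool application get rbd
[ceph: root@clienta /]# ceph osd pool ls detail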
In the production cluster, create a test RBD image and verify it. Enable pool-mode mirroring on the pool.
Create an RBD image called image1 in the rbd pool in the production cluster.
Specify a size of 1024 MB.
Enable the exclusive-lock and journaling RBD image features.
[ceph: root@clienta /]# rbd create image1 \
--size 1024 \
--pool rbd \
--image-feature=exclusive-lock,journaling
List the images, and show the information about the image1 image in the rbd pool.
[ceph: root@clienta /]# rbd -p rbd ls
image1
[ceph: root@clienta /]# rbd --image image1 info
rbd image 'image1':
  size 1 GiB in 256 objects
  order 22 (4 MiB objects)
  snapshot_count: 0
  id: acb0966ee3a0
  block_name_prefix: rbd_data.acb0966ee3a0
  format: 2
  features: exclusive-lock, journaling
  op_features:
  flags:
  create_timestamp: Wed Sep 29 21:14:20 2021
  access_timestamp: Wed Sep 29 21:14:20 2021
  modify_timestamp: Wed Sep 29 21:14:20 2021
  journal: acb0966ee3a0
  mirroring state: disabled
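Journal-based mirroring replicates only images that have the exclusive-lock and journaling features, which image1 already has. If you ever need to mirror an image that was created without them, you can add the features afterward; this is a reference sketch using the image1 name from this exercise, not an exercise step:
[ceph: root@clienta /]# rbd feature enable rbd/image1 exclusive-lock
[ceph: root@clienta /]# rbd feature enable rbd/image1 journaling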
Enable pool-mode mirroring on the rbd pool, and verify it.
[ceph: root@clienta /]# rbd mirror pool enable rbd pool
[ceph: root@clienta /]# rbd --image image1 info
rbd image 'image1':
  size 1 GiB in 256 objects
  order 22 (4 MiB objects)
  snapshot_count: 0
  id: acb0966ee3a0
  block_name_prefix: rbd_data.acb0966ee3a0
  format: 2
  features: exclusive-lock, journaling
  op_features:
  flags:
  create_timestamp: Wed Sep 29 21:14:20 2021
  access_timestamp: Wed Sep 29 21:14:20 2021
  modify_timestamp: Wed Sep 29 21:14:20 2021
  journal: acb0966ee3a0
  mirroring state: enabled
  mirroring mode: journal
  mirroring global id: a4610478-807b-4288-9581-241f651d63c3
  mirroring primary: true
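The last argument to rbd mirror pool enable selects the mirroring mode. This exercise uses pool mode, which mirrors every journaling-enabled image in the pool. For contrast only, and not to be run in this lab, image mode would enable mirroring on the pool and then per image:
[ceph: root@clienta /]# rbd mirror pool enable rbd image
[ceph: root@clienta /]# rbd mirror image enable rbd/image1 journal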
In the production cluster, create a /root/mirror/ directory.
Run the cephadm shell by using the --mount argument to mount the /root/mirror/ directory.
Bootstrap the storage cluster peer and create Ceph user accounts, then save the token in the /mnt/bootstrap_token_prod file in the container.
Copy the bootstrap token file to the backup storage cluster.
On the clienta node, exit the cephadm shell.
Create the /root/mirror/ directory, then run the cephadm shell to bind mount the /root/mirror directory. The mounted directory is available at /mnt inside the container, which is why the bootstrap token is written to /mnt/bootstrap_token_prod in the next step.
[ceph: root@clienta /]# exit
[root@clienta ~]# mkdir /root/mirror
[root@clienta ~]# cephadm shell --mount /root/mirror/
...output omitted...
[ceph: root@clienta /]#
Bootstrap the storage cluster peer and save the output in the /mnt/bootstrap_token_prod file.
Name the production cluster prod.
[ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
--site-name prod rbd > /mnt/bootstrap_token_prod
Exit the cephadm shell to the clienta host system.
Copy the bootstrap token file to the backup storage cluster in the /root directory.
[ceph: root@clienta /]# exit
exit
[root@clienta ~]# rsync -avP /root/mirror/bootstrap_token_prod \
serverf:/root/bootstrap_token_prod
...output omitted...
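The bootstrap token file contains a single line of encoded text. As an optional sanity check before importing it, you can compare checksums from the host shell on both nodes, outside the cephadm shell; matching sums confirm that the copy succeeded:
[root@clienta ~]# md5sum /root/mirror/bootstrap_token_prod
[root@serverf ~]# md5sum /root/bootstrap_token_prod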
In the backup cluster, run the cephadm shell with a bind mount of the /root/bootstrap_token_prod file.
Deploy an rbd-mirror daemon on the serverf node.
Import the bootstrap token.
Verify that the RBD image is present.
On the serverf node, exit the cephadm shell.
Run the cephadm shell again to bind mount the /root/bootstrap_token_prod file.
[ceph: root@serverf /]# exit
[root@serverf ~]# cephadm shell --mount /root/bootstrap_token_prod
...output omitted...
[ceph: root@serverf /]#
Deploy an rbd-mirror daemon, using the --placement argument to place it on the serverf.lab.example.com node, and then verify it.
[ceph: root@serverf /]# ceph orch apply rbd-mirror \
--placement=serverf.lab.example.com
Scheduled rbd-mirror update...
[ceph: root@serverf /]# ceph orch ls
NAME        RUNNING  REFRESHED  AGE  PLACEMENT
...output omitted...
rbd-mirror  1/1      1s ago     6s   serverf.lab.example.com
...output omitted...
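You can also confirm that the rbd-mirror daemon container is running on the intended host. This check is optional; depending on your Ceph version you can filter by daemon type, or simply review the full ceph orch ps listing for an rbd-mirror entry:
[ceph: root@serverf /]# ceph orch ps --daemon-type rbd-mirror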
Import the bootstrap token located in the /mnt/bootstrap_token_prod file.
Name the backup cluster bup.
[ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
--site-name bup --direction rx-only rbd /mnt/bootstrap_token_prod
Ignore the known error containing the following text: auth: unable to find a keyring on …
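The --direction rx-only option creates a one-way relationship in which the backup cluster only receives updates. If you later wanted two-way mirroring between the sites, the import would use the rx-tx direction instead; this sketch is for reference only and is not part of this exercise:
[ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
--site-name bup --direction rx-tx rbd /mnt/bootstrap_token_prod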
Verify that the RBD image is present.
[ceph: root@serverf /]# rbd -p rbd ls
image1
Display the pool information and status in both Ceph clusters.
In the production cluster, run the cephadm shell.
Display the pool information and status.
[root@clienta ~]# cephadm shell
[ceph: root@clienta /]# rbd mirror pool info rbd
Mode: pool
Site Name: prod
Peer Sites:
UUID: deacabfb-545f-4f53-9977-ce986d5b93b5
Name: bup
Mirror UUID: bec08767-04c7-494e-b01e-9c1a75f9aa0f
Direction: tx-only
[ceph: root@clienta /]# rbd mirror pool status
health: UNKNOWN
daemon health: UNKNOWN
image health: OK
images: 1 total
    1 replaying
In the backup cluster, display the pool information and status.
[ceph: root@serverf /]# rbd mirror pool info rbd
Mode: pool
Site Name: bup
Peer Sites:
UUID: 591a4f58-3ac4-47c6-a700-86408ec6d585
Name: prod
Direction: rx-only
Client: client.rbd-mirror-peer
[ceph: root@serverf /]# rbd mirror pool status
health: OK
daemon health: OK
image health: OK
images: 1 total
    1 replaying
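To drill down from the pool status to a single image, you can also display the per-image mirroring status. This optional check can be run on either cluster; in the backup cluster, the image should report a replaying state:
[ceph: root@serverf /]# rbd mirror image status rbd/image1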
Clean up your environment. Delete the RBD image from the production cluster and verify that it is absent from both clusters.
In the production cluster, remove the image1 block device from the rbd pool.
[ceph: root@clienta /]# rbd rm image1 -p rbd
Removing image: 100% complete...done.
In the production cluster, list block devices in the rbd pool.
[ceph: root@clienta /]# rbd -p rbd ls
In the backup cluster, list block devices in the rbd pool.
[ceph: root@serverf /]# rbd -p rbd ls
Exit and close the second terminal. Return to workstation as the student user.
[ceph: root@serverf /]# exit
[root@serverf ~]# exit
[admin@serverf ~]$ exit
[student@workstation ~]$ exit
[ceph: root@clienta /]# exit
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.