In this lab you will configure pool-mode RBD mirroring between two Red Hat Ceph clusters, demote the image on the primary cluster, and promote the image on the secondary cluster.
Outcomes
You should be able to configure two-way pool-mode RBD mirroring between two clusters.
As the student user on the workstation machine, use the lab command to prepare your system for this lab.
[student@workstation ~]$ lab start mirror-review
The lab command confirms that the hosts required for this exercise are accessible.
It creates the rbd pool in the primary and secondary clusters.
It also creates an image called myimage in the primary cluster, with the exclusive-lock and journaling features enabled.
Finally, this command creates the /home/admin/mirror-review directory in the primary cluster.
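For reference, the pool and image that the lab script prepares could be created manually with commands similar to the following. The exact steps the script runs are an assumption; only the pool name, image name, size, and features are taken from the lab description. Note that the journaling feature requires exclusive-lock.

[ceph: root@clienta /]# ceph osd pool create rbd
[ceph: root@clienta /]# rbd pool init rbd
[ceph: root@clienta /]# rbd create myimage --size 512M --pool rbd \
--image-feature exclusive-lock --image-feature journaling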
Procedure 7.2. Instructions
Log in to clienta as the admin user.
Run the cephadm shell with a bind mount of the /home/admin/mirror-review/ directory.
Verify that the primary cluster is in a healthy state.
Verify that the rbd pool is created successfully.
Log in to clienta as the admin user and use sudo to run the cephadm shell with a bind mount.
Use the ceph health command to verify that the primary cluster is in a healthy state.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ sudo cephadm shell --mount /home/admin/mirror-review/
[ceph: root@clienta /]# ceph health
HEALTH_OK
Verify that the rbd pool and the myimage image are created.
[ceph: root@clienta /]# ceph osd lspools
1 device_health_metrics
2 .rgw.root
3 default.rgw.log
4 default.rgw.control
5 default.rgw.meta
6 rbd
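To confirm that the myimage image also exists, you can list the images in the pool. The expected output assumes the lab setup completed successfully:

[ceph: root@clienta /]# rbd ls --pool rbd
myimage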
Deploy the rbd-mirror daemon in the primary and secondary clusters.
On the primary cluster, deploy an rbd-mirror daemon in the serverc.lab.example.com node.
[ceph: root@clienta /]# ceph orch apply rbd-mirror \
--placement=serverc.lab.example.com
Scheduled rbd-mirror update...
Open another terminal window.
Log in to serverf as the admin user and use sudo to run a cephadm shell.
Use the ceph health command to verify that the secondary cluster is in a healthy state.
[student@workstation ~]$ ssh admin@serverf
...output omitted...
[admin@serverf ~]$ sudo cephadm shell
...output omitted...
[ceph: root@serverf /]# ceph health
HEALTH_OK
Deploy an rbd-mirror daemon in the serverf.lab.example.com node.
[ceph: root@serverf /]# ceph orch apply rbd-mirror \
--placement=serverf.lab.example.com
Scheduled rbd-mirror update...
Enable pool-mode mirroring on the rbd pool and verify it.
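Optionally, before continuing, you can confirm that the rbd-mirror daemon is running on each cluster. A check similar to the following, run on either cluster, lists the deployed daemon and its state; the output varies by cluster, so none is shown here:

[ceph: root@clienta /]# ceph orch ps --daemon-type rbd-mirror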
Verify that the journaling feature on the myimage image is enabled.
On the primary cluster, enable pool-mode mirroring on the rbd pool and verify it.
[ceph: root@clienta /]# rbd mirror pool enable rbd pool
[ceph: root@clienta /]# rbd mirror pool info rbd
Mode: pool
Site Name: 2ae6d05a-229a-11ec-925e-52540000fa0c
Peer Sites: none
On the primary cluster, verify the journaling feature on the myimage image.
[ceph: root@clienta /]# rbd --image myimage info
rbd image 'myimage':
  size 512 MiB in 128 objects
  order 22 (4 MiB objects)
  snapshot_count: 0
  id: 8605767b2168
  block_name_prefix: rbd_data.8605767b2168
  format: 2
  features: exclusive-lock, journaling
  op_features:
  flags:
  create_timestamp: Thu Oct 21 13:47:22 2021
  access_timestamp: Thu Oct 21 13:47:22 2021
  modify_timestamp: Thu Oct 21 13:47:22 2021
  journal: 8605767b2168
  mirroring state: enabled
  mirroring mode: journal
  mirroring global id: 33665293-baba-4678-b9f2-ec0b8d1513ea
  mirroring primary: true
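In this lab the image already has journaling enabled. If you were preparing an image of your own for journal-based mirroring, the feature can be enabled after creation, provided exclusive-lock is already enabled on the image; a sketch:

[ceph: root@clienta /]# rbd feature enable rbd/myimage journaling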
Register the storage cluster peer to the pool, and then copy the bootstrap token file to the secondary cluster.
Bootstrap the storage cluster peer and save the output in the /mnt/bootstrap_token_primary file.
Name the production cluster primary.
[ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
--site-name primary rbd > /mnt/bootstrap_token_primary
Exit the cephadm shell to the clienta host.
Copy the bootstrap token file to the backup storage cluster in the /home/admin directory.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo rsync -avP /home/admin/mirror-review \
serverf:/home/admin/
...output omitted...
In the secondary cluster, import the bootstrap token located in the /home/admin/mirror-review/ directory.
Verify that the RBD image is present.
Exit the cephadm shell to the serverf host.
Use sudo to run the cephadm shell with a bind mount for the /home/admin/mirror-review/ directory.
[ceph: root@serverf /]# exit
exit
[admin@serverf ~]$ sudo cephadm shell --mount /home/admin/mirror-review/
Import the bootstrap token located in the /mnt/bootstrap_token_primary file.
Name the backup cluster secondary.
[ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
--site-name secondary rbd /mnt/bootstrap_token_primary
Ignore the known error containing the following text: auth: unable to find a keyring on …
Verify that the RBD image is present.
[ceph: root@serverf /]# rbd --pool rbd ls
myimage
The image can take a few minutes to replicate and appear in the list.
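Beyond listing the images, a pool-level summary on the secondary cluster can confirm overall replication health; once the daemons are replaying, the health field is expected to report OK. The detailed output varies, so none is shown here:

[ceph: root@serverf /]# rbd mirror pool status rbd --verbose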
Verify the mirroring status in both clusters. Note which is the primary image.
On the primary cluster, run the cephadm shell and verify the mirroring status.
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]# rbd mirror image status rbd/myimage
myimage:
  global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
  state:       up+stopped
  description: local image is primary
  service:     serverc.yrgdmc on serverc.lab.example.com
  last_update: 2021-10-21 16:58:20
  peer_sites:
    name: secondary
    state: up+replaying
    description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0,"non_primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1},"primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}}
    last_update: 2021-10-21 16:58:23
On the secondary cluster, verify the mirroring status.
[ceph: root@serverf /]# rbd mirror image status rbd/myimage
myimage:
  global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
  state:       up+replaying
  description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0,"non_primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1},"primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}}
  service:     serverf.kclptt on serverf.lab.example.com
  last_update: 2021-10-21 16:58:23
  peer_sites:
    name: primary
    state: up+stopped
    description: local image is primary
    last_update: 2021-10-21 16:58:20
Demote the primary image and promote the secondary image, and then verify the change.
On the primary cluster, demote the image and verify the change.
[ceph: root@clienta /]# rbd mirror image demote rbd/myimage
Image demoted to non-primary
On the secondary cluster, promote the image and verify the change.
[ceph: root@serverf /]# rbd mirror image promote rbd/myimage
Image promoted to primary
On the primary cluster, verify the change.
[ceph: root@clienta /]# rbd mirror image status rbd/myimage
myimage:
  global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
  state:       up+replaying
  description: replaying, {"bytes_per_second":1.2,"entries_behind_primary":0,"entries_per_second":0.05,"non_primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3},"primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3}}
  service:     serverc.yrgdmc on serverc.lab.example.com
  last_update: 2021-10-21 17:27:20
  peer_sites:
    name: secondary
    state: up+stopped
    description: local image is primary
    last_update: 2021-10-21 17:27:23
On the secondary cluster, verify the change.
Note that the primary image is now in the secondary cluster, on the serverf server.
[ceph: root@serverf /]# rbd mirror image status rbd/myimage
myimage:
  global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
  state:       up+stopped
  description: local image is primary
  service:     serverf.kclptt on serverf.lab.example.com
  last_update: 2021-10-21 17:28:23
  peer_sites:
    name: primary
    state: up+replaying
    description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0,"non_primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3},"primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3}}
    last_update: 2021-10-21 17:28:20
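Although not required by this lab, failing back after a planned switchover reverses the same steps: demote the image on the cluster that currently holds the primary copy, then promote it on the other cluster. A sketch:

[ceph: root@serverf /]# rbd mirror image demote rbd/myimage
[ceph: root@clienta /]# rbd mirror image promote rbd/myimage

If the two sites diverged (for example, after an unplanned failover with a forced promotion), the non-primary copy must first be resynchronized with rbd mirror image resync rbd/myimage before it can replay again.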
Return to workstation as the student user.
This concludes the lab.