
Lab: Expanding Block Storage Operations

In this lab you will configure pool-mode RBD mirroring between two Red Hat Ceph Storage clusters, demote the image on the primary cluster, and promote the image on the secondary cluster.

Outcomes

You should be able to configure two-way pool-mode RBD mirroring between two clusters.

As the student user on the workstation machine, use the lab command to prepare your system for this lab.

[student@workstation ~]$ lab start mirror-review

The lab command confirms that the hosts required for this exercise are accessible. It creates the rbd pool in both the primary and secondary clusters. It also creates an image called myimage in the primary cluster, with the exclusive-lock and journaling features enabled. Finally, this command creates the /home/admin/mirror-review directory on the clienta node in the primary cluster.
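
For reference, an image with these features could be created manually with a command like the one below. This is a minimal sketch, assuming the rbd pool and the 512 MiB image size used later in this lab; the lab script already creates the image for you, so do not run it here.

[ceph: root@clienta /]# rbd create rbd/myimage --size 512M \
  --image-feature exclusive-lock --image-feature journaling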

Procedure 7.2. Instructions

  1. Log in to clienta as the admin user. Run the cephadm shell with a bind mount of the /home/admin/mirror-review/ directory. Verify that the primary cluster is in a healthy state. Verify that the rbd pool is created successfully.

    1. Log in to clienta as the admin user and use sudo to run the cephadm shell with a bind mount. Use the ceph health command to verify that the primary cluster is in a healthy state.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ sudo cephadm shell --mount /home/admin/mirror-review/
      [ceph: root@clienta /]# ceph health
      HEALTH_OK
    2. Verify that the rbd pool and the myimage image are created.

      [ceph: root@clienta /]# ceph osd lspools
      1 device_health_metrics
      2 .rgw.root
      3 default.rgw.log
      4 default.rgw.control
      5 default.rgw.meta
      6 rbd
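
      You can also list the images to confirm that the myimage image exists. With no --pool option, the rbd ls command defaults to the rbd pool:

      [ceph: root@clienta /]# rbd ls
      myimage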
  2. Deploy the rbd-mirror daemon in the primary and secondary clusters.

    1. On the primary cluster, deploy an rbd-mirror daemon on the serverc.lab.example.com node.

      [ceph: root@clienta /]# ceph orch apply rbd-mirror \
      --placement=serverc.lab.example.com
      Scheduled rbd-mirror update...
    2. Open another terminal window. Log in to serverf as the admin user and use sudo to run a cephadm shell. Use the ceph health command to verify that the secondary cluster is in a healthy state.

      [student@workstation ~]$ ssh admin@serverf
      ...output omitted...
      [admin@serverf ~]$ sudo cephadm shell
      ...output omitted...
      [ceph: root@serverf /]# ceph health
      HEALTH_OK
    3. Deploy an rbd-mirror daemon on the serverf.lab.example.com node.

      [ceph: root@serverf /]# ceph orch apply rbd-mirror \
      --placement=serverf.lab.example.com
      Scheduled rbd-mirror update...
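
      Optionally, confirm that an rbd-mirror daemon is running on each cluster. A quick check with the orchestrator, shown here on the secondary cluster:

      [ceph: root@serverf /]# ceph orch ps --daemon-type rbd-mirror
      ...output omitted...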
  3. Enable pool-mode mirroring on the rbd pool and verify it. Verify that the journaling feature on the myimage image is enabled.

    1. On the primary cluster, enable pool-mode mirroring on the rbd pool and verify it. In the following command, the first rbd argument is the pool name and pool is the mirroring mode.

      [ceph: root@clienta /]# rbd mirror pool enable rbd pool
      [ceph: root@clienta /]# rbd mirror pool info rbd
      Mode: pool
      Site Name: 2ae6d05a-229a-11ec-925e-52540000fa0c
      
      Peer Sites: none
    2. On the primary cluster, verify the journaling feature on the myimage image.

      [ceph: root@clienta /]# rbd --image myimage info
      rbd image 'myimage':
      	size 512 MiB in 128 objects
      	order 22 (4 MiB objects)
      	snapshot_count: 0
      	id: 8605767b2168
      	block_name_prefix: rbd_data.8605767b2168
      	format: 2
      	features: exclusive-lock, journaling
      	op_features:
      	flags:
      	create_timestamp: Thu Oct 21 13:47:22 2021
      	access_timestamp: Thu Oct 21 13:47:22 2021
      	modify_timestamp: Thu Oct 21 13:47:22 2021
      	journal: 8605767b2168
      	mirroring state: enabled
      	mirroring mode: journal
      	mirroring global id: 33665293-baba-4678-b9f2-ec0b8d1513ea
      	mirroring primary: true
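
      If the journaling feature were not already enabled, you could add it to the existing image. A minimal sketch, not needed in this exercise because the lab script enables the feature at image creation:

      [ceph: root@clienta /]# rbd feature enable rbd/myimage journaling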
  4. Register the storage cluster peer to the pool, and then copy the bootstrap token file to the secondary cluster.

    1. Bootstrap the storage cluster peer and save the output in the /mnt/bootstrap_token_primary file. Name the production cluster primary. Inside the cephadm shell, the /mnt directory is the bind-mounted /home/admin/mirror-review/ directory, so the token file is also available on the clienta host.

      [ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
        --site-name primary rbd > /mnt/bootstrap_token_primary
    2. Exit the cephadm shell to return to the clienta host. Copy the /home/admin/mirror-review directory, which contains the bootstrap token file, to the /home/admin directory on the backup storage cluster.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo rsync -avP /home/admin/mirror-review \
        serverf:/home/admin/
      ...output omitted...
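
      After the copy, the token file should be present on the serverf host. You can confirm this from the serverf host shell, after you exit the cephadm shell in the next step:

      [admin@serverf ~]$ ls /home/admin/mirror-review/
      bootstrap_token_primary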
  5. In the secondary cluster, import the bootstrap token located in the /home/admin/mirror-review/ directory. Verify that the RBD image is present.

    1. Exit the cephadm shell to return to the serverf host. Use sudo to run the cephadm shell with a bind mount of the /home/admin/mirror-review/ directory.

      [ceph: root@serverf /]# exit
      exit
      [admin@serverf ~]$ sudo cephadm shell --mount /home/admin/mirror-review/
    2. Import the bootstrap token from the /mnt/bootstrap_token_primary file. Name the backup cluster secondary.

      [ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
      --site-name secondary rbd /mnt/bootstrap_token_primary

      Important

      Ignore the known error containing the following text: auth: unable to find a keyring on ...

    3. Verify that the RBD image is present.

      [ceph: root@serverf /]# rbd --pool rbd ls
      myimage

      The image could take a few minutes to replicate and appear in the list.
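
      You can also verify that the peer relationship is now registered on the secondary cluster; after the import, the Peer Sites field is no longer empty:

      [ceph: root@serverf /]# rbd mirror pool info rbd
      ...output omitted...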

  6. Verify the mirroring status in both clusters. Note which is the primary image.

    1. On the primary cluster, run the cephadm shell and verify the mirroring status.

      [admin@clienta ~]$ sudo cephadm shell
      [ceph: root@clienta /]# rbd mirror image status rbd/myimage
      myimage:
        global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
        state:       up+stopped
        description: local image is primary
        service:     serverc.yrgdmc on serverc.lab.example.com
        last_update: 2021-10-21 16:58:20
        peer_sites:
          name: secondary
          state: up+replaying
          description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0, "non_primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}, "primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}}
          last_update: 2021-10-21 16:58:23
    2. On the secondary cluster, verify the mirroring status.

      [ceph: root@serverf /]# rbd mirror image status rbd/myimage
      myimage:
        global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
        state:       up+replaying
        description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0, "non_primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}, "primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}}
        service:     serverf.kclptt on serverf.lab.example.com
        last_update: 2021-10-21 16:58:23
        peer_sites:
          name: primary
          state: up+stopped
          description: local image is primary
          last_update: 2021-10-21 16:58:20
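
      For a pool-wide summary on either cluster, the rbd mirror pool status command reports the overall mirroring health and the number of mirrored images:

      [ceph: root@clienta /]# rbd mirror pool status rbd
      ...output omitted...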
  7. Demote the primary image and promote the secondary image, and then verify the change.

    1. On the primary cluster, demote the image.

      [ceph: root@clienta /]# rbd mirror image demote rbd/myimage
      Image demoted to non-primary
    2. On the secondary cluster, promote the image.

      [ceph: root@serverf /]# rbd mirror image promote rbd/myimage
      Image promoted to primary
    3. On the primary cluster, verify the change.

      [ceph: root@clienta /]# rbd mirror image status rbd/myimage
      myimage:
        global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
        state:       up+replaying
        description: replaying, {"bytes_per_second":1.2,"entries_behind_primary":0,"entries_per_second":0.05, "non_primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3}, "primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3}}
        service:     serverc.yrgdmc on serverc.lab.example.com
        last_update: 2021-10-21 17:27:20
        peer_sites:
          name: secondary
          state: up+stopped
          description: local image is primary
          last_update: 2021-10-21 17:27:23
    4. On the secondary cluster, verify the change. Note that the primary image is now in the secondary cluster, on the serverf server.

      [ceph: root@serverf /]# rbd mirror image status rbd/myimage
      myimage:
        global_id:   33665293-baba-4678-b9f2-ec0b8d1513ea
        state:       up+stopped
        description: local image is primary
        service:     serverf.kclptt on serverf.lab.example.com
        last_update: 2021-10-21 17:28:23
        peer_sites:
          name: primary
          state: up+replaying
          description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0, "non_primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3}, "primary_position":{"entry_tid":0,"object_number":0,"tag_tid":3}}
          last_update: 2021-10-21 17:28:20
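
      This orderly demote-then-promote sequence works because both clusters are reachable. For reference, in a real failover where the primary cluster is down, you would instead promote with the --force option, and then resynchronize the old primary image after its cluster returns. Do not run these commands in this exercise:

      [ceph: root@serverf /]# rbd mirror image promote --force rbd/myimage
      [ceph: root@clienta /]# rbd mirror image resync rbd/myimage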
  8. Return to workstation as the student user.

    1. Exit and close the second terminal. Return to workstation as the student user.

      [ceph: root@serverf /]# exit
      [admin@serverf ~]$ exit
      [student@workstation ~]$ exit
      [ceph: root@clienta /]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

Grade your work by running the lab grade mirror-review command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade mirror-review

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish mirror-review

This concludes the lab.

Revision: cl260-5.0-29d2128