Guided Exercise: Configuring RBD Mirrors

In this exercise, you will configure one-way, pool-mode RBD mirroring between two Ceph clusters.

Outcomes

You should be able to:

  • Configure one-way, pool-mode RBD mirroring between two clusters.

  • Verify the status of the mirroring process between two clusters.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start mirror-mirrors

This command confirms that the hosts required for this exercise are accessible. Your Ceph clusters and configuration will not be modified by this lab start command.

Procedure 7.1. Instructions

  1. Open two terminals and log in to clienta and serverf as the admin user. Verify that both clusters are reachable and have a HEALTH_OK status.

    1. Open a terminal window, log in to clienta as the admin user, and switch to the root user. Run a cephadm shell. Verify the health of your production cluster.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ sudo -i
      [root@clienta ~]# cephadm shell
      [ceph: root@clienta /]# ceph status
      ...output omitted...
        cluster:
          id:     ff97a876-1fd2-11ec-8258-52540000fa0c
          health: HEALTH_OK
      
        services:
          mon: 4 daemons, quorum serverc.lab.example.com,servere,serverd,clienta (age 15m)
          mgr: serverc.lab.example.com.btgxor(active, since 15m), standbys: servere.fmyxwv, clienta.soxncl, serverd.ufqxxk
          osd: 9 osds: 9 up (since 15m), 9 in (since 47h)
          rgw: 2 daemons active (2 hosts, 1 zones)
      
        data:
          pools:   5 pools, 105 pgs
          objects: 190 objects, 5.3 KiB
          usage:   147 MiB used, 90 GiB / 90 GiB avail
          pgs:     105 active+clean

      Important

      Ensure that the monitor daemons displayed in the services section match those of your 3-node production cluster plus the client.

    2. Open another terminal window, log in to serverf as the admin user, and switch to the root user. Run a cephadm shell. Verify the health of your backup cluster.

      [student@workstation ~]$ ssh admin@serverf
      ...output omitted...
      [admin@serverf ~]$ sudo -i
      [root@serverf ~]# cephadm shell
      [ceph: root@serverf /]# ceph status
      ...output omitted...
        cluster:
          id:     3c67d550-1fd3-11ec-a0d5-52540000fa0f
          health: HEALTH_OK
      
        services:
          mon: 1 daemons, quorum serverf.lab.example.com (age 18m)
          mgr: serverf.lab.example.com.qfmyuk(active, since 18m)
          osd: 5 osds: 5 up (since 18m), 5 in (since 47h)
          rgw: 1 daemon active (1 hosts, 1 zones)
      
        data:
          pools:   5 pools, 105 pgs
          objects: 189 objects, 4.9 KiB
          usage:   82 MiB used, 50 GiB / 50 GiB avail
          pgs:     105 active+clean

      Important

      Ensure that the monitor daemon displayed in the services section matches that of your single-node backup cluster.

  2. Create a pool called rbd in the production cluster with 32 placement groups. In the backup cluster, create a matching pool to receive the mirrored data from the production cluster's rbd pool. Pool-mode mirroring always mirrors data between two pools that have the same name in both clusters.

    1. In the production cluster, create a pool called rbd with 32 placement groups. Enable the rbd application on the pool for use with the Ceph Block Device, then initialize the pool for RBD.

      [ceph: root@clienta /]# ceph osd pool create rbd 32 32
      pool 'rbd' created
      [ceph: root@clienta /]# ceph osd pool application enable rbd rbd
      enabled application 'rbd' on pool 'rbd'
      [ceph: root@clienta /]# rbd pool init -p rbd
    2. In the backup cluster, create a pool called rbd with 32 placement groups. Enable the rbd application on the pool for use with the Ceph Block Device, then initialize the pool for RBD.

      [ceph: root@serverf /]# ceph osd pool create rbd 32 32
      pool 'rbd' created
      [ceph: root@serverf /]# ceph osd pool application enable rbd rbd
      enabled application 'rbd' on pool 'rbd'
      [ceph: root@serverf /]# rbd pool init -p rbd
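
      Note

      If you want to confirm that both pools exist and have the rbd application enabled, you can optionally list the pool details on each cluster. The output is omitted here because it varies between clusters.

      [ceph: root@clienta /]# ceph osd pool ls detail
      ...output omitted...
      [ceph: root@serverf /]# ceph osd pool ls detail
      ...output omitted...
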
  3. In the production cluster, create a test RBD image and verify it. Enable pool-mode mirroring on the pool.

    1. Create an RBD image called image1 in the rbd pool in the production cluster. Specify a size of 1024 MB. Enable the exclusive-lock and journaling RBD image features.

      [ceph: root@clienta /]# rbd create image1 \
       --size 1024 \
       --pool rbd \
       --image-feature=exclusive-lock,journaling
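
      Note

      Journal-based mirroring requires the exclusive-lock and journaling image features. The image1 image already has both, so no further action is needed here, but if an existing image lacks them you can enable them later with the rbd feature enable command, for example:

      [ceph: root@clienta /]# rbd feature enable rbd/image1 exclusive-lock
      [ceph: root@clienta /]# rbd feature enable rbd/image1 journaling
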
    2. List the images, and show the information about the image1 image in the rbd pool.

      [ceph: root@clienta /]# rbd -p rbd ls
      image1
      [ceph: root@clienta /]# rbd --image image1 info
      rbd image 'image1':
      	size 1 GiB in 256 objects
      	order 22 (4 MiB objects)
      	snapshot_count: 0
      	id: acb0966ee3a0
      	block_name_prefix: rbd_data.acb0966ee3a0
      	format: 2
      	features: exclusive-lock, journaling
      	op_features:
      	flags:
      	create_timestamp: Wed Sep 29 21:14:20 2021
      	access_timestamp: Wed Sep 29 21:14:20 2021
      	modify_timestamp: Wed Sep 29 21:14:20 2021
      	journal: acb0966ee3a0
      	mirroring state: disabled
    3. Enable pool-mode mirroring on the rbd pool, and verify it.

      [ceph: root@clienta /]# rbd mirror pool enable rbd pool
      [ceph: root@clienta /]# rbd --image image1 info
      rbd image 'image1':
      	size 1 GiB in 256 objects
      	order 22 (4 MiB objects)
      	snapshot_count: 0
      	id: acb0966ee3a0
      	block_name_prefix: rbd_data.acb0966ee3a0
      	format: 2
      	features: exclusive-lock, journaling
      	op_features:
      	flags:
      	create_timestamp: Wed Sep 29 21:14:20 2021
      	access_timestamp: Wed Sep 29 21:14:20 2021
      	modify_timestamp: Wed Sep 29 21:14:20 2021
      	journal: acb0966ee3a0
      	mirroring state: enabled
      	mirroring mode: journal
      	mirroring global id: a4610478-807b-4288-9581-241f651d63c3
      	mirroring primary: true
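
      Note

      This exercise uses pool mode, which mirrors every image in the pool that has the journaling feature enabled. As an alternative, image mode lets you choose individual images to mirror. The following commands sketch that approach and are not used in this exercise:

      [ceph: root@clienta /]# rbd mirror pool enable rbd image
      [ceph: root@clienta /]# rbd mirror image enable rbd/image1 journal
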
  4. In the production cluster, create a /root/mirror/ directory. Run the cephadm shell by using the --mount argument to mount the /root/mirror/ directory. Bootstrap the storage cluster peer and create Ceph user accounts, then save the token in the /mnt/bootstrap_token_prod file in the container. Copy the bootstrap token file to the backup storage cluster.

    1. On the clienta node, exit the cephadm shell. Create the /root/mirror/ directory, then run the cephadm shell to bind mount the /root/mirror directory.

      [ceph: root@clienta /]# exit
      [root@clienta ~]# mkdir /root/mirror
      [root@clienta ~]# cephadm shell --mount /root/mirror/
      ...output omitted...
      [ceph: root@clienta /]#
    2. Bootstrap the storage cluster peer and save the output in the /mnt/bootstrap_token_prod file. Name the production cluster prod.

      [ceph: root@clienta /]# rbd mirror pool peer bootstrap create \
       --site-name prod rbd > /mnt/bootstrap_token_prod
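
      Note

      The bootstrap command creates a peer user account and writes a base64-encoded token to the file. You can optionally confirm that the token file is not empty before you copy it:

      [ceph: root@clienta /]# cat /mnt/bootstrap_token_prod
      ...output omitted...
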
    3. Exit the cephadm shell to return to the clienta host system. Copy the bootstrap token file to the /root directory on the backup storage cluster.

      [ceph: root@clienta /]# exit
      exit
      [root@clienta ~]# rsync -avP /root/mirror/bootstrap_token_prod \
       serverf:/root/bootstrap_token_prod
      ...output omitted...
  5. In the backup cluster, run the cephadm shell with a bind mount of the /root/bootstrap_token_prod file. Deploy an rbd-mirror daemon on the serverf node. Import the bootstrap token. Verify that the RBD image is present.

    1. On the serverf node, exit the cephadm shell. Run the cephadm shell again to bind mount the /root/bootstrap_token_prod file.

      [ceph: root@serverf /]# exit
      [root@serverf ~]# cephadm shell --mount /root/bootstrap_token_prod
      ...output omitted...
      [ceph: root@serverf /]#
    2. Deploy an rbd-mirror daemon by using the --placement argument to place it on the serverf.lab.example.com node, and then verify it.

      [ceph: root@serverf /]# ceph orch apply rbd-mirror \
       --placement=serverf.lab.example.com
      Scheduled rbd-mirror update...
      [ceph: root@serverf /]# ceph orch ls
      NAME                     RUNNING  REFRESHED  AGE  PLACEMENT
      ...output omitted...
      rbd-mirror                   1/1  1s ago     6s   serverf.lab.example.com
      ...output omitted...
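
      Note

      You can also check that the rbd-mirror daemon process itself is running by listing the cluster daemons and finding the rbd-mirror entry in the output. The output is omitted here.

      [ceph: root@serverf /]# ceph orch ps
      ...output omitted...
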
    3. Import the bootstrap token located in the /mnt/bootstrap_token_prod file. Name the backup cluster bup.

      [ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
       --site-name bup --direction rx-only rbd /mnt/bootstrap_token_prod

      Important

      Ignore the known error containing the following text: auth: unable to find a keyring on …
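
      Note

      The --direction rx-only option configures one-way mirroring, so the backup cluster only receives data. For two-way mirroring you would instead omit the option (the default direction is rx-tx) and also deploy an rbd-mirror daemon on the production cluster. A sketch of that alternative import, not used in this exercise:

      [ceph: root@serverf /]# rbd mirror pool peer bootstrap import \
       --site-name bup rbd /mnt/bootstrap_token_prod
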

    4. Verify that the RBD image is present.

      [ceph: root@serverf /]# rbd -p rbd ls
      image1
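
      Note

      You can also query the mirroring status of an individual image from the backup cluster, where the rbd-mirror daemon runs. The output is omitted here; it reports the current replication state of the image.

      [ceph: root@serverf /]# rbd mirror image status rbd/image1
      ...output omitted...
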
  6. Display the pool information and status in both Ceph clusters.

    1. In the production cluster, run the cephadm shell. Display the pool information and status.

      [root@clienta ~]# cephadm shell
      [ceph: root@clienta /]# rbd mirror pool info rbd
      Mode: pool
      Site Name: prod
      
      Peer Sites:
      
      UUID: deacabfb-545f-4f53-9977-ce986d5b93b5
      Name: bup
      Mirror UUID: bec08767-04c7-494e-b01e-9c1a75f9aa0f
      Direction: tx-only
      [ceph: root@clienta /]# rbd mirror pool status
      health: UNKNOWN
      daemon health: UNKNOWN
      image health: OK
      images: 1 total
          1 replaying
    2. In the backup cluster, display the pool information and status.

      [ceph: root@serverf /]# rbd mirror pool info rbd
      Mode: pool
      Site Name: bup
      
      Peer Sites:
      
      UUID: 591a4f58-3ac4-47c6-a700-86408ec6d585
      Name: prod
      Direction: rx-only
      Client: client.rbd-mirror-peer
      [ceph: root@serverf /]# rbd mirror pool status
      health: OK
      daemon health: OK
      image health: OK
      images: 1 total
          1 replaying
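
      Note

      To see per-image details in the same report, you can optionally add the --verbose option to the pool status command on either cluster. The output is omitted here.

      [ceph: root@serverf /]# rbd mirror pool status rbd --verbose
      ...output omitted...
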
  7. Clean up your environment. Delete the RBD image from the production cluster and verify that it is absent from both clusters.

    1. In the production cluster, remove the image1 block device from the rbd pool.

      [ceph: root@clienta /]# rbd rm image1 -p rbd
      Removing image: 100% complete...done.
    2. In the production cluster, list block devices in the rbd pool.

      [ceph: root@clienta /]# rbd -p rbd ls
    3. In the backup cluster, list block devices in the rbd pool.

      [ceph: root@serverf /]# rbd -p rbd ls
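
      Note

      The rbd-mirror daemon propagates the deletion asynchronously. If image1 still appears in the backup cluster listing, wait a few seconds and list the images again:

      [ceph: root@serverf /]# rbd -p rbd ls
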
  8. Exit and close the second terminal. Return to workstation as the student user.

    [ceph: root@serverf /]# exit
    [root@serverf ~]# exit
    [admin@serverf ~]$ exit
    [student@workstation ~]$ exit
    [ceph: root@clienta /]# exit
    [root@clienta ~]# exit
    [admin@clienta ~]$ exit
    [student@workstation ~]$

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish mirror-mirrors

This concludes the guided exercise.

Revision: cl260-5.0-29d2128