Abstract
| Goal | Expand block storage operations by implementing remote mirroring and the iSCSI Gateway. |
Expanding Block Storage Operations |
After completing this section, you should be able to configure an RBD mirror to replicate an RBD block device between two Ceph clusters for disaster recovery purposes.
Red Hat Ceph Storage supports RBD mirroring between two storage clusters. This allows you to automatically replicate RBD images from one Red Hat Ceph Storage cluster to another remote cluster. This mechanism keeps the source (primary) RBD image and the target (secondary) RBD image synchronized over the network by using an asynchronous mechanism. If the cluster containing the primary RBD image becomes unavailable, then you can fail over to the secondary RBD image from the remote cluster and restart the applications that use it.
RBD mirroring supports two configurations:
In one-way mode, the RBD images of one cluster are available in read/write mode and the remote cluster contains mirrors. The mirroring agent runs on the remote cluster. This mode enables the configuration of multiple secondary clusters.
In two-way mode, Ceph synchronizes the source and target pairs (primary and secondary). This mode allows replication between only two clusters, and you must configure the mirroring agent on each cluster.
RBD mirroring supports two modes: pool mode and image mode.
In pool mode, Ceph automatically enables mirroring for each RBD image created in the mirrored pool. When you create an image in the pool on the source cluster, Ceph creates a secondary image on the remote cluster.
In image mode, mirroring can be selectively enabled for individual RBD images within the mirrored pool. In this mode, you have to explicitly select the RBD images to replicate between the two clusters.
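As an illustrative sketch, assuming a pool named mypool and an image named myimage (placeholder names), the first command below enables pool mode for the whole pool, while the last two enable image mode on the pool and then explicitly select one image for replication:
[ceph: root@node /]# rbd mirror pool enable mypool pool
[ceph: root@node /]# rbd mirror pool enable mypool image
[ceph: root@node /]# rbd mirror image enable mypool/myimage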
RBD images asynchronously mirrored between two Red Hat Ceph Storage clusters use one of the following modes:
Journal-based mirroring uses the RBD journaling image feature to ensure point-in-time and crash-consistent replication between two Red Hat Ceph Storage clusters. Every write to the RBD image is first recorded to the associated journal before modifying the actual image. The remote cluster reads from this journal and replays the updates to its local copy of the image.
Snapshot-based mirroring uses periodically scheduled or manually created RBD image mirror snapshots to replicate crash-consistent RBD images between two Red Hat Ceph Storage clusters.
The remote cluster determines any data or metadata updates between two mirror snapshots and copies the deltas to the image's local copy.
The RBD fast-diff image feature enables the quick determination of updated data blocks without the need to scan the full RBD image.
The complete delta between two snapshots must be synced prior to use during a failover scenario.
Any partially applied set of deltas will be rolled back at the moment of failover.
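For illustration, again using the placeholder names mypool and myimage, a mirror snapshot can be taken on demand or on a schedule; the 1h interval below is only an example value:
[ceph: root@node /]# rbd mirror image snapshot mypool/myimage
[ceph: root@node /]# rbd mirror snapshot schedule add --pool mypool --image myimage 1h
[ceph: root@node /]# rbd mirror snapshot schedule ls --pool mypool --recursive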
In case of an inconsistent state between the two peer clusters, the rbd-mirror daemon does not attempt to mirror the image that is causing the inconsistency. Use rbd mirror image resync to resynchronize the image.
[ceph: root@node /]# rbd mirror image resync mypool/myimage
Use rbd mirror image enable or rbd mirror image disable to enable or disable mirroring for individual images in a pool configured in image mode, on both peer storage clusters.
[ceph: root@node /]# rbd mirror image enable mypool/myimage
To use snapshot-based mirroring, convert journal-based mirroring to snapshot-based mirroring by disabling mirroring on the image and then enabling it again in snapshot mode.
[ceph: root@node /]# rbd mirror image disable mypool/myimage
Mirroring disabled
[ceph: root@node /]# rbd mirror image enable mypool/myimage snapshot
Mirroring enabled
As a storage administrator, you can improve redundancy by mirroring data images between Red Hat Ceph Storage clusters. Ceph block device mirroring provides protection against data loss, such as a site failure.
To configure RBD mirroring, and to enable the rbd-mirror daemon to discover its peer cluster, you must register the peer cluster and create a user account.
Red Hat Ceph Storage 5 automates this process by using the rbd mirror pool peer bootstrap create command.
Each instance of the rbd-mirror daemon must connect to both the local and remote Ceph clusters simultaneously.
Also, the network must have sufficient bandwidth between the two data centers to handle the mirroring workload.
The rbd-mirror daemon does not require the source and destination clusters to have unique internal names; both can and should call themselves ceph.
The rbd mirror pool peer bootstrap command utilizes the --site-name option to describe the clusters used by the rbd-mirror daemon.
The following list outlines the steps required to configure mirroring between two clusters, called prod and backup:
Create a pool with the same name in both clusters, prod and backup.
Create or modify the RBD image with the exclusive-lock and journaling features enabled.
Enable pool-mode or image-mode mirroring on the pool.
In the prod cluster, bootstrap the storage cluster peer and save the bootstrap token.
Deploy an rbd-mirror daemon.
For one-way replication, the rbd-mirror daemon runs only on the backup cluster.
For two-way replication, the rbd-mirror daemon runs on both clusters.
In the backup cluster, import the bootstrap token.
For one-way replication, use the --direction rx-only argument.
In this example, you see the step-by-step instructions needed to configure one-way mirroring with the prod and backup clusters.
[admin@node ~]$ ssh admin@prod-node
[admin@prod-node ~]# sudo cephadm shell --mount /home/admin/token/
[ceph: root@prod-node /]# ceph osd pool create rbd 32 32
pool 'rbd' created
[ceph: root@prod-node /]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[ceph: root@prod-node /]# rbd pool init -p rbd
[ceph: root@prod-node /]# rbd create my-image \
  --size 1024 \
  --pool rbd \
  --image-feature=exclusive-lock,journaling
[ceph: root@prod-node /]# rbd mirror pool enable rbd pool
[ceph: root@prod-node /]# rbd --image my-image info
rbd image 'my-image':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: acf674690a0c
    block_name_prefix: rbd_data.acf674690a0c
    format: 2
    features: exclusive-lock, journaling
    op_features:
    flags:
    create_timestamp: Wed Oct 6 22:07:41 2021
    access_timestamp: Wed Oct 6 22:07:41 2021
    modify_timestamp: Wed Oct 6 22:07:41 2021
    journal: acf674690a0c
    mirroring state: enabled
    mirroring mode: journal
    mirroring global id: d1140b2e-4809-4965-852a-2c21d181819b
    mirroring primary: true
[ceph: root@prod-node /]# rbd mirror pool peer bootstrap create \
  --site-name prod rbd > /mnt/bootstrap_token_prod
[ceph: root@prod-node /]# exit
exit
[root@prod-node ~]# rsync -avP /home/admin/token/bootstrap_token_prod \
  backup-node:/home/admin/token/bootstrap_token_prod
...output omitted...
[root@prod-node ~]# exit
logout
[admin@node ~]$ ssh admin@backup-node
[root@backup-node ~]# cephadm shell --mount /home/admin/token/
[ceph: root@backup-node /]# ceph osd pool create rbd 32 32
pool 'rbd' created
[ceph: root@backup-node /]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[ceph: root@backup-node /]# rbd pool init -p rbd
[ceph: root@backup-node /]# ceph orch apply rbd-mirror \
  --placement=backup-node.example.com
Scheduled rbd-mirror update...
[ceph: root@backup-node /]# rbd mirror pool peer bootstrap import \
  --site-name backup --direction rx-only rbd /mnt/bootstrap_token_prod
[ceph: root@backup-node /]# rbd -p rbd ls
my-image
The backup cluster displays the following pool information and status.
[ceph: root@backup-node /]# rbd mirror pool info rbd
Mode: pool
Site Name: backup
Peer Sites:
  UUID: 5e2f6c8c-a7d9-4c59-8128-d5c8678f9980
  Name: prod
  Direction: rx-only
  Client: client.rbd-mirror-peer
[ceph: root@backup-node /]# rbd mirror pool status
health: OK
daemon health: OK
image health: OK
images: 1 total
    1 replaying
The prod cluster displays the following pool information and status.
[ceph: root@prod-node /]# rbd mirror pool info rbd
Mode: pool
Site Name: prod
Peer Sites:
  UUID: 6c5f860c-b683-44b4-9592-54c8f26ac749
  Name: backup
  Mirror UUID: 7224d1c5-4bd5-4bc3-aa19-e3b34efd8369
  Direction: tx-only
[ceph: root@prod-node /]# rbd mirror pool status
health: UNKNOWN
daemon health: UNKNOWN
image health: OK
images: 1 total
    1 replaying
In one-way mode, the source cluster is not aware of the state of the replication. The RBD mirroring agent in the target cluster updates the status information.
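To check the replication state of a single image rather than the whole pool, you can run the rbd mirror image status command on the backup cluster; for the my-image example above, a healthy image reports a state such as up+replaying:
[ceph: root@backup-node /]# rbd mirror image status rbd/my-image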
If the primary RBD image becomes unavailable, then you can use the following steps to enable access to the secondary RBD image:
Stop access to the primary RBD image. This means stopping all applications and virtual machines that are using the image.
Use the rbd mirror image demote pool-name/image-name command to demote the primary RBD image.
Use the rbd mirror image promote pool-name/image-name command to promote the secondary RBD image.
Resume access to the RBD image. Restart the applications and virtual machines.
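For example, with the rbd/my-image image from the preceding configuration, an orderly failover could be performed with commands such as the following, demoting on the prod cluster and promoting on the backup cluster:
[ceph: root@prod-node /]# rbd mirror image demote rbd/my-image
[ceph: root@backup-node /]# rbd mirror image promote rbd/my-image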
When a failover after a non-orderly shutdown occurs, you must promote the non-primary images from a Ceph Monitor node in the backup storage cluster.
Use the --force option, because the demotion cannot be propagated to the primary storage cluster.
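In that case, using the same example image, the promotion on the backup cluster could be forced as follows:
[ceph: root@backup-node /]# rbd mirror image promote --force rbd/my-image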
rbd(8) man page
For more information, refer to the Mirroring Ceph block devices chapter in the Block Device Guide for Red Hat Ceph Storage 5 at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/block_device_guide/index