In this exercise, you will export an RBD image, then import the image into another cluster.
Outcomes
You should be able to:
Export an entire RBD image.
Import an entire RBD image.
Export the changes applied to an RBD image between two points in time.
Import the changes applied to an RBD image into another RBD image.
As the student user on the workstation machine, use the lab command to prepare your systems for this exercise.
[student@workstation ~]$ lab start block-import
This command confirms that the hosts required for this exercise are accessible.
It also ensures that clienta has the necessary RBD client authentication keys.
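If you want to check these prerequisites manually, the following sketch uses only standard commands; it assumes the lab's usual host names and that the client configuration and keyring files are stored under /etc/ceph on clienta, so adjust the paths if your environment differs.
[student@workstation ~]$ for host in clienta serverf; do ssh admin@${host} hostname; done
...output omitted...
[student@workstation ~]$ ssh admin@clienta ls /etc/ceph
...output omitted...
The first command confirms that both hosts accept SSH connections, and the second lists the Ceph client configuration and keyring files on clienta.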
Procedure 6.3. Instructions
Open two terminals and log in to clienta and serverf as the admin user.
Verify that both clusters are reachable and have a HEALTH_OK status.
Open a terminal window.
Log in to clienta as the admin user and use sudo to run the cephadm shell.
Verify that the primary cluster is in a healthy state.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]# ceph health
HEALTH_OK
Open another terminal window.
Log in to serverf as the admin user and use sudo to run the cephadm shell.
Verify that the secondary cluster is in a healthy state.
[student@workstation ~]$ ssh admin@serverf
...output omitted...
[admin@serverf ~]$ sudo cephadm shell
[ceph: root@serverf /]# ceph health
HEALTH_OK
Create a pool called rbd in each cluster, enable the rbd application on it, and initialize the pool for use by RBD.
In the primary cluster, create a pool called rbd with 32 placement groups.
Enable the rbd application on the pool, and then initialize the pool for use by RBD.
[ceph: root@clienta /]# ceph osd pool create rbd 32
pool 'rbd' created
[ceph: root@clienta /]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[ceph: root@clienta /]# rbd pool init -p rbd
In the secondary cluster, create a pool called rbd with 32 placement groups.
Enable the rbd application on the pool, and then initialize the pool for use by RBD.
[ceph: root@serverf /]# ceph osd pool create rbd 32
pool 'rbd' created
[ceph: root@serverf /]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[ceph: root@serverf /]# rbd pool init -p rbd
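Optionally, verify the pool setup on either cluster before continuing. This quick check uses standard Ceph commands and is not one of the exercise steps.
[ceph: root@serverf /]# ceph osd pool ls detail | grep rbd
...output omitted...
[ceph: root@serverf /]# ceph osd pool application get rbd
...output omitted...
The first command shows the pool's placement group count and flags, and the second confirms that the rbd application is enabled on the pool.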
Create an RBD image called rbd/test in your primary Ceph cluster.
Map it as a block device, format it with an XFS file system, mount it on the /mnt/rbd directory, copy some data into it, and then unmount it.
Create the RBD image.
Exit the cephadm shell and switch to the root user.
Map the image, and then format it with an XFS file system.
[ceph: root@clienta /]# rbd create test --size 128 --pool rbd
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo -i
[root@clienta ~]# rbd map --pool rbd test
/dev/rbd0
[root@clienta ~]# mkfs.xfs /dev/rbd0
...output omitted...
Mount /dev/rbd0 to the /mnt/rbd directory and copy a file to it.
[root@clienta ~]# mount /dev/rbd0 /mnt/rbd
[root@clienta ~]# mount | grep rbd
/dev/rbd0 on /mnt/rbd type xfs (rw,relatime,seclabel,attr2,inode64,...)
[root@clienta ~]# cp /etc/ceph/ceph.conf /mnt/rbd/file0
[root@clienta ~]# ls /mnt/rbd
file0
Unmount your file system to ensure that the system flushes all data to the Ceph cluster.
[root@clienta ~]# umount /mnt/rbd
Create a backup copy of the primary rbd/test block device.
Export the entire rbd/test image to a file called /mnt/export.dat.
Copy the export.dat file to the secondary cluster.
In the primary cluster, run the cephadm shell using the --mount argument to bind mount the /home/admin/rbd-export/ directory.
[root@clienta ~]# cephadm shell --mount /home/admin/rbd-export/
...output omitted...
[ceph: root@clienta /]#
Export the entire rbd/test image to a file called /mnt/export.dat.
Exit the cephadm shell.
[ceph: root@clienta /]# rbd export rbd/test /mnt/export.dat
Exporting image: 100% complete...done.
[ceph: root@clienta /]# exit
exit
[root@clienta ~]#
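Optionally, before copying the file, record its size and a checksum on the clienta host so that you can verify the transfer later. This informal check assumes that the bind-mounted directory on the host is /home/admin/rbd-export/, as configured above.
[root@clienta ~]# ls -lh /home/admin/rbd-export/export.dat
...output omitted...
[root@clienta ~]# sha256sum /home/admin/rbd-export/export.dat
...output omitted...
After the copy, running sha256sum against /home/admin/rbd-import/export.dat on serverf should produce the same checksum.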
Copy the export.dat file to the secondary cluster in the /home/admin/rbd-import/ directory.
[root@clienta ~]# rsync -avP /home/admin/rbd-export/export.dat \
serverf:/home/admin/rbd-import
...output omitted...
In the secondary cluster, import the /mnt/export.dat file that contains the exported rbd/test RBD image.
Confirm that the import was successful by mapping the imported image to a block device, mounting it, and inspecting its contents.
Exit the current cephadm shell.
Use sudo to run the cephadm shell with the --mount argument to bind mount the /home/admin/rbd-import/ directory.
[ceph: root@serverf /]# exit
[admin@serverf ~]$ sudo cephadm shell --mount /home/admin/rbd-import/
[ceph: root@serverf /]#
List the contents of the backup cluster's empty rbd pool.
Use the rbd import command to import the RBD image contained in the /mnt/export.dat file into the backup cluster, referring to it as rbd/test.
[ceph: root@serverf /]# rbd --pool rbd ls
[ceph: root@serverf /]# rbd import /mnt/export.dat rbd/test
Importing image: 100% complete...done.
[ceph: root@serverf /]# rbd du --pool rbd test
NAME  PROVISIONED  USED
test  128 MiB      32 MiB
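You can also review the imported image's metadata, such as its size, object layout, and enabled features, with an optional rbd info command.
[ceph: root@serverf /]# rbd info rbd/test
...output omitted...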
Exit the cephadm shell, and then switch to the root user.
Map the backup cluster's imported RBD image and mount the file system it contains.
Confirm that its contents are the same as those originally created on the primary cluster's RBD image.
[ceph: root@serverf /]# exit
exit
[admin@serverf ~]$ sudo -i
[root@serverf ~]# rbd map --pool rbd test
/dev/rbd0
[root@serverf ~]# mount /dev/rbd0 /mnt/rbd
[root@serverf ~]# mount | grep rbd
/dev/rbd0 on /mnt/rbd type xfs (rw,relatime,seclabel,attr2,inode64,...)
[root@serverf ~]# df -h /mnt/rbd
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/rbd0   121M  7.8M  113M   7%    /mnt/rbd
[root@serverf ~]# ls -l /mnt/rbd
total 4
-rw-r--r--. 1 admin users 177 Sep 30 22:02 file0
[root@serverf ~]# cat /mnt/rbd/file0
# minimal ceph.conf for c315020c-21f0-11ec-b6d6-52540000fa0c
[global]
fsid = c315020c-21f0-11ec-b6d6-52540000fa0c
mon_host = [v2:172.25.250.12:3300/0,v1:172.25.250.12:6789/0]
Unmount the file system and unmap the RBD image.
[root@serverf ~]# umount /mnt/rbd
[root@serverf ~]# rbd unmap /dev/rbd0
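Optionally, confirm that no RBD devices remain mapped on serverf; rbd showmapped lists any mapped images, and its output should be empty at this point.
[root@serverf ~]# rbd showmapped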
In this part of the exercise, you will create a pair of snapshots of rbd/test on your primary cluster and export the changes between those snapshots as an incremental diff image.
You will then import the changes from the incremental diff into your copy of the rbd/test image on your secondary cluster.
In the primary cluster, run the cephadm shell and create an initial snapshot called rbd/test@firstsnap.
Calculate the provisioned and actual disk usage of the rbd/test image and its associated snapshots.
[root@clienta ~]# cephadm shell
...output omitted...
[ceph: root@clienta /]# rbd snap create rbd/test@firstsnap
Creating snap: 100% complete...done.
[ceph: root@clienta /]# rbd du --pool rbd test
NAME            PROVISIONED  USED
test@firstsnap  128 MiB      36 MiB
test            128 MiB      36 MiB
<TOTAL>         128 MiB      72 MiB
[ceph: root@clienta /]# exit
exit
[root@clienta ~]#
In the secondary cluster, run the cephadm shell and create an initial snapshot called rbd/test@firstsnap.
Calculate the provisioned and actual disk usage of the rbd/test image and its associated snapshots.
[root@serverf ~]# cephadm shell
...output omitted...
[ceph: root@serverf /]# rbd snap create rbd/test@firstsnap
Creating snap: 100% complete...done.
[ceph: root@serverf /]# rbd du --pool rbd test
NAME            PROVISIONED  USED
test@firstsnap  128 MiB      32 MiB
test            128 MiB      32 MiB
<TOTAL>         128 MiB      64 MiB
[ceph: root@serverf /]# exit
exit
[root@serverf ~]#
In the primary cluster, mount the file system on the /dev/rbd0 device, which is mapped from the rbd/test image.
Make changes to the file system so that the underlying RBD image changes.
Unmount the file system when you are finished.
[root@clienta ~]# mount /dev/rbd0 /mnt/rbd
[root@clienta ~]# dd if=/dev/zero of=/mnt/rbd/file1 bs=1M count=5
...output omitted...
[root@clienta ~]# ls -l /mnt/rbd/
total 5124
-rw-r--r--. 1 admin users 177 Sep 30 22:02 file0
-rw-r--r--. 1 admin users 5242880 Sep 30 23:15 file1
[root@clienta ~]# umount /mnt/rbd
In the primary cluster, run the cephadm shell and note that the amount of data used in the image of the primary cluster increased.
Create a new snapshot called rbd/test@secondsnap to mark the end of the window of changes that you want to export.
Note the adjustments made to the reported used data.
[root@clienta ~]# cephadm shell
...output omitted...
[ceph: root@clienta /]# rbd du --pool rbd test
NAME            PROVISIONED  USED
test@firstsnap  128 MiB      36 MiB
test            128 MiB      40 MiB
<TOTAL>         128 MiB      76 MiB
[ceph: root@clienta /]# rbd snap create rbd/test@secondsnap
Creating snap: 100% complete...done.
[ceph: root@clienta /]# rbd du --pool rbd test
NAME             PROVISIONED  USED
test@firstsnap   128 MiB      36 MiB
test@secondsnap  128 MiB      40 MiB
test             128 MiB      12 MiB
<TOTAL>          128 MiB      88 MiB
In the primary cluster, exit the current cephadm shell.
Run the cephadm shell with the --mount argument to bind mount the /home/admin/rbd-export/ directory.
[ceph: root@clienta /]# exit
exit
[root@clienta ~]# cephadm shell --mount /home/admin/rbd-export/
...output omitted...
[ceph: root@clienta /]#
Export the changes between the snapshots of the primary cluster's rbd/test image to a file called /mnt/export-diff.dat.
Exit the cephadm shell, and copy the export-diff.dat file to the secondary cluster in the /home/admin/rbd-import/ directory.
[ceph: root@clienta /]# rbd export-diff --from-snap firstsnap \
rbd/test@secondsnap /mnt/export-diff.dat
Exporting image: 100% complete...done.
[ceph: root@clienta /]# exit
exit
[root@clienta ~]# rsync -avP /home/admin/rbd-export/export-diff.dat \
serverf:/home/admin/rbd-import
...output omitted...
In the secondary cluster, run the cephadm shell using the --mount argument to mount the /home/admin/rbd-import/ directory.
Use the rbd import-diff command to import the changes to the secondary cluster's copy of the rbd/test image by using the /mnt/export-diff.dat file.
Alternatively, you can pipe the output of rbd export-diff directly into rbd import-diff, which eliminates the need to save the exported changes to a file as an intermediate step; a sketch of that approach appears after the following output.
Inspect the disk usage of the secondary cluster's copy of the rbd/test image before and after the import.
Exit the cephadm shell.
[root@serverf ~]# cephadm shell --mount /home/admin/rbd-import
[ceph: root@serverf /]# rbd du --pool rbd test
NAME            PROVISIONED  USED
test@firstsnap  128 MiB      32 MiB
test            128 MiB      32 MiB
<TOTAL>         128 MiB      64 MiB
[ceph: root@serverf /]# rbd import-diff /mnt/export-diff.dat rbd/test
Importing image diff: 100% complete...done.
[ceph: root@serverf /]# rbd du --pool rbd test
NAME             PROVISIONED  USED
test@firstsnap   128 MiB      32 MiB
test@secondsnap  128 MiB      32 MiB
test             128 MiB      8 MiB
<TOTAL>          128 MiB      72 MiB
[ceph: root@serverf /]# exit
exit
[root@serverf ~]#
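As noted earlier, you can avoid the intermediate file by piping rbd export-diff directly into rbd import-diff. The following minimal sketch is an alternative to the steps you just completed; it runs from the clienta host and assumes that the admin user on serverf can run sudo without a password and that the rbd command and suitable keyrings are available on both hosts, as they are in this lab.
[root@clienta ~]# rbd export-diff --from-snap firstsnap rbd/test@secondsnap - | \
ssh admin@serverf sudo rbd import-diff - rbd/test
The dash arguments tell rbd export-diff to write to standard output and rbd import-diff to read from standard input, so the changes stream between the clusters without touching the local disk.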
The end snapshot, secondsnap, is now present on the secondary cluster's RBD image.
The rbd import-diff command creates it automatically.
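You can optionally confirm the snapshots from the serverf host, where the rbd command is available, before mounting the image; the listing should show both firstsnap and secondsnap.
[root@serverf ~]# rbd snap ls rbd/test
...output omitted...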
Verify that the backup cluster's image is identical to the primary cluster's RBD image.
[root@serverf ~]# rbd map --pool rbd test
/dev/rbd0
[root@serverf ~]# mount /dev/rbd0 /mnt/rbd
[root@serverf ~]# df /mnt/rbd
Filesystem  1K-blocks  Used   Available  Use%  Mounted on
/dev/rbd0   123584     13064  110520     11%   /mnt/rbd
[root@serverf ~]# ls -l /mnt/rbd
total 5124
-rw-r--r--. 1 admin users 177 Sep 30 22:02 file0
-rw-r--r--. 1 admin users 5242880 Sep 30 23:15 file1
[root@serverf ~]# umount /mnt/rbd
Clean up your environment.
In the primary cluster, unmap the RBD image.
Run the cephadm shell, purge all existing snapshots on the RBD image, and then delete the image.
[root@clienta ~]# rbd unmap /dev/rbd0
[root@clienta ~]# cephadm shell
...output omitted...
[ceph: root@clienta /]# rbd --pool rbd snap purge test
Removing all snapshots: 100% complete...done.
[ceph: root@clienta /]# rbd rm test --pool rbd
Removing image: 100% complete...done.
[ceph: root@clienta /]# exit
exit
[root@clienta ~]#
In the secondary cluster, unmap the RBD image.
Run the cephadm shell, purge all existing snapshots on the RBD image, and then delete the image.
[root@serverf ~]# rbd unmap /dev/rbd0
[root@serverf ~]# cephadm shell
...output omitted...
[ceph: root@serverf /]# rbd --pool rbd snap purge test
Removing all snapshots: 100% complete...done.
[ceph: root@serverf /]# rbd rm test --pool rbd
Removing image: 100% complete...done.
[ceph: root@serverf /]# exit
exit
[root@serverf ~]#
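If you want to confirm the cleanup, an optional check on either host is to list the images in the rbd pool; the command prints nothing when the pool no longer contains any images.
[root@serverf ~]# rbd ls --pool rbd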
Exit and close the second terminal.
Return to workstation as the student user.
[root@serverf ~]# exit
[admin@serverf ~]$ exit
[student@workstation ~]$ exit
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.