Guided Exercise: Importing and Exporting RBD Images

In this exercise, you will export an RBD image, then import the image into another cluster.

Outcomes

You should be able to:

  • Export an entire RBD image.

  • Import an entire RBD image.

  • Export the changes applied to an RBD image between two points in time.

  • Import the changes applied to an RBD image into another RBD image.

As the student user on the workstation machine, use the lab command to prepare your systems for this exercise.

[student@workstation ~]$ lab start block-import

This command confirms that the hosts required for this exercise are accessible. It also ensures that clienta has the necessary RBD client authentication keys.
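
If you want to verify this before you begin, you can list the Ceph configuration and keyring files on clienta; the exact file names depend on how the lab environment provisions the client.

[student@workstation ~]$ ssh admin@clienta ls /etc/ceph
...output omitted...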

Procedure 6.3. Instructions

  1. Open two terminals and log in to clienta and serverf as the admin user. Verify that both clusters are reachable and have a HEALTH_OK status.

    1. Open a terminal window. Log in to clienta as the admin user and use sudo to run the cephadm shell. Verify that the primary cluster is in a healthy state.

      [student@workstation ~]$ ssh admin@clienta
      ...output omitted...
      [admin@clienta ~]$ sudo cephadm shell
      [ceph: root@clienta /]# ceph health
      HEALTH_OK
    2. Open another terminal window. Log in to serverf as the admin user and use sudo to run the cephadm shell. Verify that the secondary cluster is in a healthy state.

      [student@workstation ~]$ ssh admin@serverf
      ...output omitted...
      [admin@serverf ~]$ sudo cephadm shell
      [ceph: root@serverf /]# ceph health
      HEALTH_OK
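      Note

      If you want more detail than ceph health provides, the ceph status command (also available as ceph -s) reports the cluster FSID together with the monitor, OSD, and pool status, which helps you confirm that each terminal is connected to the cluster that you expect.

      [ceph: root@clienta /]# ceph status
      ...output omitted...
      [ceph: root@serverf /]# ceph status
      ...output omitted...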
  2. On both clusters, create a pool called rbd, enable the rbd application on the pool, and then initialize the pool for use by RBD.

    1. In the primary cluster, create a pool called rbd with 32 placement groups. Enable the rbd application on the pool, and then initialize the pool for use by RBD.

      [ceph: root@clienta /]# ceph osd pool create rbd 32
      pool 'rbd' created
      [ceph: root@clienta /]# ceph osd pool application enable rbd rbd
      enabled application 'rbd' on pool 'rbd'
      [ceph: root@clienta /]# rbd pool init -p rbd
    2. In the secondary cluster, create a pool called rbd with 32 placement groups. Enable the rbd application on the pool, and then initialize the pool for use by RBD.

      [ceph: root@serverf /]# ceph osd pool create rbd 32
      pool 'rbd' created
      [ceph: root@serverf /]# ceph osd pool application enable rbd rbd
      enabled application 'rbd' on pool 'rbd'
      [ceph: root@serverf /]# rbd pool init -p rbd
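      Note

      If you want to confirm that the rbd application is enabled on a pool, you can query it with the ceph osd pool application get command on either cluster.

      [ceph: root@serverf /]# ceph osd pool application get rbd
      ...output omitted...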
  3. Create an RBD image called rbd/test in your primary Ceph cluster. Map it as a block device, format it with an XFS file system, mount it on the /mnt/rbd directory, copy some data into it, and then unmount it.

    1. Create the RBD image. Exit the cephadm shell and switch to the root user. Map the image, and then format it with an XFS file system.

      [ceph: root@clienta /]# rbd create test --size 128 --pool rbd
      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo -i
      [root@clienta ~]# rbd map --pool rbd test
      /dev/rbd0
      [root@clienta ~]# mkfs.xfs /dev/rbd0
      ...output omitted...
    2. Mount /dev/rbd0 to the /mnt/rbd directory and copy a file to it.

      [root@clienta ~]# mount /dev/rbd0 /mnt/rbd
      [root@clienta ~]# mount | grep rbd
      /dev/rbd0 on /mnt/rbd type xfs (rw,relatime,seclabel,attr2,inode64,...)
      [root@clienta ~]# cp /etc/ceph/ceph.conf /mnt/rbd/file0
      [root@clienta ~]# ls /mnt/rbd
      file0
    3. Unmount your file system to ensure that the system flushes all data to the Ceph cluster.

      [root@clienta ~]# umount /mnt/rbd
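      Note

      The image remains mapped after you unmount the file system. If you want to confirm which RBD images are currently mapped on the client, you can use the rbd showmapped command.

      [root@clienta ~]# rbd showmapped
      ...output omitted...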
  4. Create a backup copy of the primary cluster's rbd/test block device. Export the entire rbd/test image to a file called /mnt/export.dat inside the cephadm shell, which corresponds to /home/admin/rbd-export/export.dat on the clienta host. Copy the export.dat file to the secondary cluster.

    1. In the primary cluster, run the cephadm shell using the --mount argument to bind mount the /home/admin/rbd-export/ directory.

      [root@clienta ~]# cephadm shell --mount /home/admin/rbd-export/
      ...output omitted...
      [ceph: root@clienta /]#
    2. Export the entire rbd/test image to a file called /mnt/export.dat. Exit the cephadm shell.

      [ceph: root@clienta /]# rbd export rbd/test /mnt/export.dat
      Exporting image: 100% complete...done.
      [ceph: root@clienta /]# exit
      exit
      [root@clienta ~]#
    3. Copy the export.dat file to the /home/admin/rbd-import/ directory on the secondary cluster.

      [root@clienta ~]# rsync -avP /home/admin/rbd-export/export.dat \
       serverf:/home/admin/rbd-import
      ...output omitted...
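      Note

      Because rbd export can write to standard output and rbd import can read from standard input (both accept - as the file name), you could instead stream the image directly to the secondary cluster and skip the intermediate file. A minimal sketch, assuming that the user you connect to on serverf can run rbd with valid client credentials outside the cephadm shell:

      [root@clienta ~]# rbd export rbd/test - | ssh admin@serverf rbd import - rbd/test
      ...output omitted...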
  5. In the secondary cluster, import the exported rbd/test RBD image from the /mnt/export.dat file. Confirm that the import was successful by mapping the imported image to a block device, mounting it, and inspecting its contents.

    1. Exit the current cephadm shell. Use sudo to run the cephadm shell with the --mount argument to bind mount the /home/admin/rbd-import/ directory.

      [ceph: root@serverf /]# exit
      [admin@serverf ~]$ sudo cephadm shell --mount /home/admin/rbd-import/
      [ceph: root@serverf /]#
    2. List the contents of the backup cluster's empty rbd pool. Use the rbd import command to import the RBD image contained in the /mnt/export.dat file into the backup cluster, referring to it as rbd/test.

      [ceph: root@serverf /]# rbd --pool rbd ls
      [ceph: root@serverf /]# rbd import /mnt/export.dat rbd/test
      Importing image: 100% complete...done.
      [ceph: root@serverf /]# rbd du --pool rbd test
      NAME  PROVISIONED  USED
      test      128 MiB  32 MiB
    3. Exit the cephadm shell, and then switch to the root user. Map the backup cluster's imported RBD image and mount the file system it contains. Confirm that its contents are the same as those originally created on the primary cluster's RBD image.

      [ceph: root@serverf /]# exit
      exit
      [admin@serverf ~]$ sudo -i
      [root@serverf ~]# rbd map --pool rbd test
      /dev/rbd0
      [root@serverf ~]# mount /dev/rbd0 /mnt/rbd
      [root@serverf ~]# mount | grep rbd
      /dev/rbd0 on /mnt/rbd type xfs (rw,relatime,seclabel,attr2,inode64,...)
      [root@serverf ~]# df -h /mnt/rbd
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/rbd0       121M  7.8M  113M   7% /mnt/rbd
      [root@serverf ~]# ls -l /mnt/rbd
      total 4
      -rw-r--r--. 1 admin users 177 Sep 30 22:02 file0
      [root@serverf ~]# cat /mnt/rbd/file0
      # minimal ceph.conf for c315020c-21f0-11ec-b6d6-52540000fa0c
      [global]
      	fsid = c315020c-21f0-11ec-b6d6-52540000fa0c
      	mon_host = [v2:172.25.250.12:3300/0,v1:172.25.250.12:6789/0]
    4. Unmount the file system and unmap the RBD image.

      [root@serverf ~]# umount /mnt/rbd
      [root@serverf ~]# rbd unmap /dev/rbd0
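      Note

      In addition to checking the file contents, you can inspect the imported image's metadata with the rbd info command and compare it with the same command's output on the primary cluster. The image size should match; attributes such as the enabled features follow the destination cluster's defaults, because rbd import creates a new image and the plain export format carries only the image data.

      [root@serverf ~]# rbd info rbd/test
      ...output omitted...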
  6. In this part of the exercise, you will create a pair of snapshots of rbd/test on your primary cluster and export the changes between those snapshots as an incremental diff image. You will then import the changes from the incremental diff into your copy of the rbd/test image on your secondary cluster.

    1. In the primary cluster, run the cephadm shell and create an initial snapshot called rbd/test@firstsnap. Calculate the provisioned and actual disk usage of the rbd/test image and its associated snapshots.

      [root@clienta ~]# cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# rbd snap create rbd/test@firstsnap
      Creating snap: 100% complete...done.
      [ceph: root@clienta /]# rbd du --pool rbd test
      NAME            PROVISIONED  USED
      test@firstsnap      128 MiB  36 MiB
      test                128 MiB  36 MiB
      <TOTAL>             128 MiB  72 MiB
      [ceph: root@clienta /]# exit
      exit
      [root@clienta ~]#
    2. In the secondary cluster, run the cephadm shell and create an initial snapshot called rbd/test@firstsnap. Calculate the provisioned and actual disk usage of the rbd/test image and its associated snapshots.

      [root@serverf ~]# cephadm shell
      ...output omitted...
      [ceph: root@serverf /]# rbd snap create rbd/test@firstsnap
      Creating snap: 100% complete...done.
      [ceph: root@serverf /]# rbd du --pool rbd test
      NAME            PROVISIONED  USED
      test@firstsnap      128 MiB  32 MiB
      test                128 MiB  32 MiB
      <TOTAL>             128 MiB  64 MiB
      [ceph: root@serverf /]# exit
      exit
      [root@serverf ~]#
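      Note

      The rbd import-diff command that you use later in this exercise requires that the starting snapshot recorded in the diff (firstsnap) already exists on the destination image, which is why you create the same snapshot on both clusters. You can list the snapshots of an image with the rbd snap ls command.

      [root@serverf ~]# rbd snap ls rbd/test
      ...output omitted...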
    3. In the primary cluster, mount the file system on the /dev/rbd0 device, which is mapped from the rbd/test image, and then make changes to the file system to modify the RBD image. Unmount the file system when you are finished.

      [root@clienta ~]# mount /dev/rbd0 /mnt/rbd
      [root@clienta ~]# dd if=/dev/zero of=/mnt/rbd/file1 bs=1M count=5
      ...output omitted...
      [root@clienta ~]# ls -l /mnt/rbd/
      total 5124
      -rw-r--r--. 1 admin users     177 Sep 30 22:02 file0
      -rw-r--r--. 1 admin users 5242880 Sep 30 23:15 file1
      [root@clienta ~]# umount /mnt/rbd
    4. In the primary cluster, run the cephadm shell and note that the amount of data used by the image increased. Create a new snapshot called rbd/test@secondsnap to mark the end of the time window for the changes that you want to export. Note how the reported used data changes: after you create the snapshot, the data written before it is attributed to the snapshot, and the image itself reports mainly the space modified since that snapshot.

      [root@clienta ~]# cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# rbd du --pool rbd test
      NAME            PROVISIONED  USED
      test@firstsnap      128 MiB  36 MiB
      test                128 MiB  40 MiB
      <TOTAL>             128 MiB  76 MiB
      [ceph: root@clienta /]# rbd snap create rbd/test@secondsnap
      Creating snap: 100% complete...done.
      [ceph: root@clienta /]# rbd du --pool rbd test
      NAME             PROVISIONED  USED
      test@firstsnap       128 MiB  36 MiB
      test@secondsnap      128 MiB  40 MiB
      test                 128 MiB  12 MiB
      <TOTAL>              128 MiB  88 MiB
    5. In the primary cluster, exit the current cephadm shell. Run the cephadm shell with the --mount argument to bind mount the /home/admin/rbd-export/ directory.

      [ceph: root@clienta /]# exit
      exit
      [root@clienta ~]# cephadm shell --mount /home/admin/rbd-export/
      ...output omitted...
      [ceph: root@clienta /]#
    6. Export the changes between the snapshots of the primary cluster's rbd/test image to a file called /mnt/export-diff.dat. Exit the cephadm shell, and copy the export-diff.dat file to the /home/admin/rbd-import/ directory on the secondary cluster.

      [ceph: root@clienta /]# rbd export-diff --from-snap firstsnap \
      rbd/test@secondsnap /mnt/export-diff.dat
      Exporting image: 100% complete...done.
      [ceph: root@clienta /]# exit
      exit
      [root@clienta ~]# rsync -avP /home/admin/rbd-export/export-diff.dat \
       serverf:/home/admin/rbd-import
      ...output omitted...
    7. In the secondary cluster, run the cephadm shell using the --mount argument to bind mount the /home/admin/rbd-import/ directory. Use the rbd import-diff command to apply the changes in the /mnt/export-diff.dat file to the secondary cluster's copy of the rbd/test image. Inspect the disk usage of the image before and after the import. Exit the cephadm shell.

      [root@serverf ~]# cephadm shell --mount /home/admin/rbd-import
      [ceph: root@serverf /]# rbd du --pool rbd test
      NAME            PROVISIONED  USED
      test@firstsnap      128 MiB  32 MiB
      test                128 MiB  32 MiB
      <TOTAL>             128 MiB  64 MiB
      [ceph: root@serverf /]# rbd import-diff /mnt/export-diff.dat rbd/test
      Importing image diff: 100% complete...done.
      [ceph: root@serverf /]# rbd du --pool rbd test
      NAME             PROVISIONED  USED
      test@firstsnap       128 MiB  32 MiB
      test@secondsnap      128 MiB  32 MiB
      test                 128 MiB   8 MiB
      <TOTAL>              128 MiB  72 MiB
      [ceph: root@serverf /]# exit
      exit
      [root@serverf ~]#

      Note

      The end snapshot is present on the secondary cluster's RBD image. The rbd import-diff command automatically creates it.
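
      Note

      As with the full export in an earlier step, you can stream an incremental diff directly between the clusters instead of copying a file, because rbd export-diff and rbd import-diff accept - as the file name for standard output and standard input. A minimal sketch, assuming that the user you connect to on serverf can run rbd with valid client credentials outside the cephadm shell:

      [root@clienta ~]# rbd export-diff --from-snap firstsnap rbd/test@secondsnap - | \
       ssh admin@serverf rbd import-diff - rbd/test
      ...output omitted...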

    8. Verify that the backup cluster's image is identical to the primary cluster's RBD image.

      [root@serverf ~]# rbd map --pool rbd test
      /dev/rbd0
      [root@serverf ~]# mount /dev/rbd0 /mnt/rbd
      [root@serverf ~]# df /mnt/rbd
      Filesystem     1K-blocks  Used Available Use% Mounted on
      /dev/rbd0         123584 13064    110520  11% /mnt/rbd
      [root@serverf ~]# ls -l /mnt/rbd
      total 5124
      -rw-r--r--. 1 admin users     177 Sep 30 22:02 file0
      -rw-r--r--. 1 admin users 5242880 Sep 30 23:15 file1
      [root@serverf ~]# umount /mnt/rbd
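      Note

      For a stronger check than comparing directory listings, you can compare checksums of the files on the two clusters. A minimal sketch on the secondary cluster; running the same commands on clienta, with its copy of the image mapped and mounted, should produce identical checksums.

      [root@serverf ~]# mount /dev/rbd0 /mnt/rbd
      [root@serverf ~]# md5sum /mnt/rbd/file0 /mnt/rbd/file1
      ...output omitted...
      [root@serverf ~]# umount /mnt/rbd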
  7. Clean up your environment.

    1. In the primary cluster, unmap the RBD image. Run the cephadm shell, purge all existing snapshots of the rbd/test image, and then delete the image.

      [root@clienta ~]# rbd unmap /dev/rbd0
      [root@clienta ~]# cephadm shell
      ...output omitted...
      [ceph: root@clienta /]# rbd --pool rbd snap purge test
      Removing all snapshots: 100% complete...done.
      [ceph: root@clienta /]# rbd rm test --pool rbd
      Removing image: 100% complete...done.
      [ceph: root@clienta /]# exit
      exit
      [root@clienta ~]#
    2. In the secondary cluster, unmap the RBD image. Run the cephadm shell, purge all existing snapshots of the rbd/test image, and then delete the image.

      [root@serverf ~]# rbd unmap /dev/rbd0
      [root@serverf ~]# cephadm shell
      ...output omitted...
      [ceph: root@serverf /]# rbd --pool rbd snap purge test
      Removing all snapshots: 100% complete...done.
      [ceph: root@serverf /]# rbd rm test --pool rbd
      Removing image: 100% complete...done.
      [ceph: root@serverf /]# exit
      exit
      [root@serverf ~]#
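      Note

      To confirm that the cleanup succeeded, you can list the images in the rbd pool on each cluster. The command returns no output because the test image has been removed.

      [root@serverf ~]# rbd ls --pool rbd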
  8. Log out of serverf and close the second terminal. In the first terminal, log out of clienta and return to workstation as the student user.

    [root@serverf ~]# exit
    [admin@serverf ~]$ exit
    [student@workstation ~]$ exit
    [root@clienta ~]# exit
    [admin@clienta ~]$ exit
    [student@workstation ~]$

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish block-import

This concludes the guided exercise.

Revision: cl260-5.0-29d2128