Lab: Configuring Red Hat Ceph Storage

In this review, you configure a Red Hat Ceph Storage cluster to meet the specified requirements.

Outcomes

You should be able to configure cluster settings and components, such as pools, users, OSDs, and the CRUSH map.

If you did not reset your classroom virtual machines at the end of the last chapter, save any work from earlier exercises that you want to keep, and then reset the classroom environment now.

Important

Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. All remaining comprehensive reviews use the default Ceph cluster provided in the initial classroom environment.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This script ensures that all cluster hosts are reachable.

[student@workstation ~]$ lab start comprehensive-review2

Specifications

  • Set the value of osd_pool_default_pg_num to 250 in the configuration database.

  • Create a CRUSH rule called onhdd to target HDD-based OSDs for replicated pools.

  • Create a replicated pool called rbd1 that uses the onhdd CRUSH map rule. Set the application type to rbd and the number of replicas for the objects in this pool to five.

  • Create the following CRUSH hierarchy. Do not associate any OSD with this new tree.

    default-4-lab        (root bucket)
        DC01             (datacenter bucket)
            firstfloor   (room bucket)
                hostc    (host bucket)
            secondfloor  (room bucket)
                hostd    (host bucket)
  • Create a new erasure code profile called cl260. Pools using this profile must set two data chunks and one coding chunk per object.

  • Create an erasure coded pool called testec that uses your new cl260 profile. Set its application type to rgw.

  • Create a user called client.fortestec that can store and retrieve objects under the docs namespace in the pool called testec. This user must not have access to any other pool or namespace. Save the associated keyring file as /etc/ceph/ceph.client.fortestec.keyring on clienta.

  • Upload the /usr/share/doc/ceph/sample.ceph.conf file as an object called report under the docs namespace in the pool called testec.

  • Update the OSD near-capacity limits for the cluster (the OSDs on serverc, serverd, and servere). Set the full ratio to 90% and the near-full ratio to 86%.

  • Locate the host on which the osd.7 daemon is running. List the available storage devices on that host.

  1. Set the value of osd_pool_default_pg_num to 250 in the configuration database.

    1. Log in to clienta as the admin user and use sudo to run the cephadm shell.

      [student@workstation ~]$ ssh admin@clienta
      [admin@clienta ~]$ sudo cephadm shell
      [ceph: root@clienta /]#
    2. Set the value of osd_pool_default_pg_num to 250 in the configuration database.

      [ceph: root@clienta /]# ceph config set mon osd_pool_default_pg_num 250
    3. Verify the setting.

      [ceph: root@clienta /]# ceph config get mon osd_pool_default_pg_num
      250
      [ceph: root@clienta /]# ceph config dump | grep osd_pool_default_pg_num
        mon                                           advanced  osd_pool_default_pg_num                250
  2. Create a CRUSH rule called onhdd to target HDD-based OSDs for replicated pools.

    1. Create a new rule called onhdd to target HDD-based OSDs for replicated pools.

      [ceph: root@clienta /]# ceph osd crush rule create-replicated onhdd default \
      host hdd
    2. Verify that the new rule exists.

      [ceph: root@clienta /]# ceph osd crush rule ls
      replicated_rule
      onhdd
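      Optionally, you could dump the onhdd rule to confirm that it targets the hdd device class with a host failure domain. This extra check is not part of the graded solution.

      [ceph: root@clienta /]# ceph osd crush rule dump onhdd
      ...output omitted...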
  3. Create a replicated pool called rbd1 that uses the onhdd CRUSH map rule. Set the application type to rbd and the number of replicas for the objects in this pool to five.

    1. Create a new replicated pool called rbd1 that uses the onhdd CRUSH map rule.

      [ceph: root@clienta /]# ceph osd pool create rbd1 onhdd
      pool 'rbd1' created
    2. Set rbd as the application type for the pool.

      [ceph: root@clienta /]# ceph osd pool application enable rbd1 rbd
      enabled application 'rbd' on pool 'rbd1'
    3. Increase the number of replicas for the pool to five and verify the new value.

      [ceph: root@clienta /]# ceph osd pool set rbd1 size 5
      set pool 6 size to 5
      [ceph: root@clienta /]# ceph osd pool ls detail
      ...output omitted...
      pool 6 'rbd1' replicated size 5 min_size 3 crush_rule 1 object_hash rjenkins pg_num 250 pgp_num 250 autoscale_mode on last_change 235 flags hashpspool stripe_width 0 application rbd
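      Optionally, you could also query the individual pool parameters. This extra check is not part of the graded solution.

      [ceph: root@clienta /]# ceph osd pool get rbd1 size
      size: 5
      [ceph: root@clienta /]# ceph osd pool get rbd1 crush_rule
      crush_rule: onhdd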
  4. Create the following CRUSH hierarchy. Do not associate any OSD with this new tree.

    default-4-lab        (root bucket)
        DC01             (datacenter bucket)
            firstfloor   (room bucket)
                hostc    (host bucket)
            secondfloor  (room bucket)
                hostd    (host bucket)
    1. Create the buckets.

      [ceph: root@clienta /]# ceph osd crush add-bucket default-4-lab root
      added bucket default-4-lab type root to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket DC01 datacenter
      added bucket DC01 type datacenter to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket firstfloor room
      added bucket firstfloor type room to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket hostc host
      added bucket hostc type host to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket secondfloor room
      added bucket secondfloor type room to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket hostd host
      added bucket hostd type host to crush map
    2. Build the hierarchy.

      [ceph: root@clienta /]# ceph osd crush move DC01 root=default-4-lab
      moved item id -10 name 'DC01' to location {root=default-4-lab} in crush map
      [ceph: root@clienta /]# ceph osd crush move firstfloor datacenter=DC01
      moved item id -11 name 'firstfloor' to location {datacenter=DC01} in crush map
      [ceph: root@clienta /]# ceph osd crush move hostc room=firstfloor
      moved item id -12 name 'hostc' to location {room=firstfloor} in crush map
      [ceph: root@clienta /]# ceph osd crush move secondfloor datacenter=DC01
      moved item id -13 name 'secondfloor' to location {datacenter=DC01} in crush map
      [ceph: root@clienta /]# ceph osd crush move hostd room=secondfloor
      moved item id -14 name 'hostd' to location {room=secondfloor} in crush map
    3. Display the CRUSH map tree to verify the new hierarchy.

      [ceph: root@clienta /]# ceph osd crush tree
      ID   CLASS  WEIGHT   TYPE NAME
       -9               0  root default-4-lab
      -10               0      datacenter DC01
      -11               0          room firstfloor
      -12               0              host hostc
      -13               0          room secondfloor
      -14               0              host hostd
       -1         0.08817  root default
       -3         0.02939      host serverc
        0    hdd  0.00980          osd.0
        1    hdd  0.00980          osd.1
        2    hdd  0.00980          osd.2
       -7         0.02939      host serverd
        3    hdd  0.00980          osd.3
        5    hdd  0.00980          osd.5
        7    hdd  0.00980          osd.7
       -5         0.02939      host servere
        4    hdd  0.00980          osd.4
        6    hdd  0.00980          osd.6
        8    hdd  0.00980          osd.8
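      Optionally, the ceph osd crush ls command lists the direct children of a bucket, which is another way to confirm the new hierarchy. This check is not part of the graded solution.

      [ceph: root@clienta /]# ceph osd crush ls default-4-lab
      ...output omitted...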
  5. Create a new erasure code profile called cl260. Pools that use this profile must set two data chunks and one coding chunk per object.

    1. Create a new erasure code profile called cl260.

      [ceph: root@clienta /]# ceph osd erasure-code-profile set cl260 k=2 m=1
    2. Verify the new erasure code profile parameters.

      [ceph: root@clienta /]# ceph osd erasure-code-profile get cl260
      crush-device-class=
      crush-failure-domain=host
      crush-root=default
      jerasure-per-chunk-alignment=false
      k=2
      m=1
      plugin=jerasure
      technique=reed_sol_van
      w=8
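      Optionally, you could list all erasure code profiles to confirm that cl260 exists in addition to the default profile. This check is not part of the graded solution.

      [ceph: root@clienta /]# ceph osd erasure-code-profile ls
      ...output omitted...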
  6. Create an erasure coded pool called testec that uses your new cl260 profile. Set its application type to rgw.

    1. Create an erasure coded pool called testec that uses the cl260 profile.

      [ceph: root@clienta /]# ceph osd pool create testec erasure cl260
      pool 'testec' created
    2. Set rgw as the application type for the pool.

      [ceph: root@clienta /]# ceph osd pool application enable testec rgw
      enabled application 'rgw' on pool 'testec'
    3. List the new pool parameters.

      [ceph: root@clienta /]# ceph osd pool ls detail
      ...output omitted...
      pool 7 'testec' erasure profile cl260 size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 250 pgp_num 250 autoscale_mode on last_change 309 flags hashpspool stripe_width 8192 application rgw
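      Optionally, you could query the pool directly to confirm its erasure code profile and application. This extra check is not part of the graded solution.

      [ceph: root@clienta /]# ceph osd pool get testec erasure_code_profile
      erasure_code_profile: cl260
      [ceph: root@clienta /]# ceph osd pool application get testec
      ...output omitted...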
  7. Create a user called client.fortestec that can store and retrieve objects under the docs namespace in the pool called testec. This user must not have access to any other pool or namespace. Save the associated keyring file as /etc/ceph/ceph.client.fortestec.keyring on clienta.

    1. Exit from the current cephadm shell. Start a new cephadm shell with the /etc/ceph directory as a bind mount.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo cephadm shell --mount /etc/ceph/:/etc/ceph
    2. Create a user called client.fortestec, with read and write capabilities in the docs namespace within the testec pool. Save the associated keyring file as /etc/ceph/ceph.client.fortestec.keyring in the mounted directory.

      [ceph: root@clienta /]# ceph auth get-or-create client.fortestec mon 'allow r' \
      osd 'allow rw pool=testec namespace=docs' \
      -o /etc/ceph/ceph.client.fortestec.keyring
    3. To verify your work, attempt to store and retrieve an object. The diff command returns no output when the file contents are the same. When finished, remove the object.

      [ceph: root@clienta /]# rados --id fortestec -p testec -N docs \
      put testdoc /etc/services
      [ceph: root@clienta /]# rados --id fortestec -p testec -N docs \
      get testdoc /tmp/test
      [ceph: root@clienta /]# diff /etc/services /tmp/test
      [ceph: root@clienta /]# rados --id fortestec -p testec -N docs rm testdoc
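      Optionally, you could display the capabilities granted to the new user to confirm that access is limited to the testec pool and the docs namespace. This check is not part of the graded solution.

      [ceph: root@clienta /]# ceph auth get client.fortestec
      ...output omitted...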
  8. Upload the /usr/share/doc/ceph/sample.ceph.conf file as an object called report under the docs namespace in the pool called testec.

    1. Use the rados command to upload the /usr/share/doc/ceph/sample.ceph.conf file.

      [ceph: root@clienta ~]# rados --id fortestec -p testec -N docs \
      put report /usr/share/doc/ceph/sample.ceph.conf
    2. Obtain report object details to confirm that the upload was successful.

      [ceph: root@clienta ~]# rados --id fortestec -p testec -N docs stat report
      testec/report mtime 2021-10-29T11:44:21.000000+0000, size 19216
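      Optionally, you could list the objects in the docs namespace to confirm that the report object is present. This check is not part of the graded solution.

      [ceph: root@clienta ~]# rados --id fortestec -p testec -N docs ls
      report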
  9. Update the OSD near-capacity limits for the cluster. Set the full ratio to 90% and the near-full ratio to 86%.

    1. Set the full_ratio parameter to 0.9 (90%) and the nearfull_ratio to 0.86 (86%) in the OSD map.

      [ceph: root@clienta ~]# ceph osd set-full-ratio 0.9
      osd set-full-ratio 0.9
      [ceph: root@clienta ~]# ceph osd set-nearfull-ratio 0.86
      osd set-nearfull-ratio 0.86
    2. Dump the OSD map and verify the new value of the two parameters.

      [ceph: root@clienta ~]# ceph osd dump | grep ratio
      full_ratio 0.9
      backfillfull_ratio 0.9
      nearfull_ratio 0.86
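      The backfill-full threshold is not part of the lab specification. If you also needed to adjust it, you could use the analogous ceph osd set-backfillfull-ratio command; the 0.88 value below is only an illustration, not a lab requirement.

      [ceph: root@clienta ~]# ceph osd set-backfillfull-ratio 0.88
      ...output omitted...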
  10. Locate the host that runs the osd.7 daemon. List that host's available storage devices.

    1. Locate the osd.7 daemon. Its location might be different in your lab environment.

      [ceph: root@clienta ~]# ceph osd find osd.7
      {
          "osd": 7,
          "addrs": {
              "addrvec": [
                  {
                      "type": "v2",
                      "addr": "172.25.250.13:6816",
                      "nonce": 1160376750
                  },
                  {
                      "type": "v1",
                      "addr": "172.25.250.13:6817",
                      "nonce": 1160376750
                  }
              ]
          },
          "osd_fsid": "53f9dd65-430a-4e5a-a2f6-536c5453f02a",
          "host": "serverd.lab.example.com",
          "crush_location": {
              "host": "serverd",
              "root": "default"
          }
      }
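      As an optional alternative, the OSD metadata also reports the host name that runs the daemon. This check is not part of the graded solution.

      [ceph: root@clienta ~]# ceph osd metadata 7
      ...output omitted...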
    2. Use the ceph orch device ls command to list the available storage devices on that host. Substitute the host name that you found in your environment.

      [ceph: root@clienta ~]# ceph orch device ls --hostname=serverd.lab.example.com
      Hostname                 Path      Type  Serial                Size   Health   Ident  Fault  Available
      serverd.lab.example.com  /dev/vde  hdd   65d5b32d-594c-4dbe-b  10.7G  Unknown  N/A    N/A    Yes
      serverd.lab.example.com  /dev/vdf  hdd   63124a05-de2b-434f-8  10.7G  Unknown  N/A    N/A    Yes
      serverd.lab.example.com  /dev/vdb  hdd   e20fad81-6237-409e-9  10.7G  Unknown  N/A    N/A    No
      serverd.lab.example.com  /dev/vdc  hdd   6dad9f98-2a5a-4aa8-b  10.7G  Unknown  N/A    N/A    No
      serverd.lab.example.com  /dev/vdd  hdd   880d431d-15f3-4c20-b  10.7G  Unknown  N/A    N/A    No
    3. Return to workstation as the student user.

      [ceph: root@clienta /]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

Grade your work by running the lab grade comprehensive-review2 command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade comprehensive-review2

Finish

As the student user on the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish comprehensive-review2

This concludes the lab.
