
Lab: Creating and Customizing Storage Maps

In this lab, you will modify the CRUSH map, create a CRUSH rule, and set the CRUSH tunables profile.

Outcomes

You should be able to create a new CRUSH hierarchy and move OSDs into it, create a CRUSH rule and configure a replicated pool to use it, and set the CRUSH tunables profile.

As the student user on the workstation machine, use the lab command to prepare your system for this lab.

[student@workstation ~]$ lab start map-review

This command confirms that the hosts required for this exercise are accessible, backs up the CRUSH map, and sets the mon_allow_pool_delete setting to true.
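If you want to inspect what the setup script changed before you modify anything, you can export and decompile the binary CRUSH map yourself and confirm the mon_allow_pool_delete setting. The following commands are a minimal sketch to run later from inside the cephadm shell; the /tmp file names are arbitrary examples.

[ceph: root@clienta /]# ceph osd getcrushmap -o /tmp/crushmap.bin
[ceph: root@clienta /]# crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
[ceph: root@clienta /]# ceph config get mon mon_allow_pool_delete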

Procedure 5.3. Instructions

  1. Create a new CRUSH hierarchy under root=review-cl260 that has two data center buckets (dc1 and dc2), two rack buckets (rack1 and rack2), one in each data center, and two host buckets (hostc and hostd), one in each rack.

    Place osd.1 and osd.2 into dc1, rack1, hostc.

    Place osd.3 and osd.4 into dc2, rack2, hostd.

    1. Log in to clienta as the admin user and use sudo to run the cephadm shell.

      [student@workstation ~]$ ssh admin@clienta
      [admin@clienta ~]$ sudo cephadm shell
      [ceph: root@clienta /]#
    2. Create the buckets with the ceph osd crush add-bucket command.

      [ceph: root@clienta /]# ceph osd crush add-bucket review-cl260 root
      added bucket review-cl260 type root to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket dc1 datacenter
      added bucket dc1 type datacenter to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket dc2 datacenter
      added bucket dc2 type datacenter to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket rack1 rack
      added bucket rack1 type rack to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket rack2 rack
      added bucket rack2 type rack to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket hostc host
      added bucket hostc type host to crush map
      [ceph: root@clienta /]# ceph osd crush add-bucket hostd host
      added bucket hostd type host to crush map
    3. Use the ceph osd crush move command to build the hierarchy.

      [ceph: root@clienta /]# ceph osd crush move dc1 root=review-cl260
      moved item id -10 name 'dc1' to location {root=review-cl260} in crush map
      [ceph: root@clienta /]# ceph osd crush move dc2 root=review-cl260
      moved item id -11 name 'dc2' to location {root=review-cl260} in crush map
      [ceph: root@clienta /]# ceph osd crush move rack1 datacenter=dc1
      moved item id -12 name 'rack1' to location {datacenter=dc1} in crush map
      [ceph: root@clienta /]# ceph osd crush move rack2 datacenter=dc2
      moved item id -13 name 'rack2' to location {datacenter=dc2} in crush map
      [ceph: root@clienta /]# ceph osd crush move hostc rack=rack1
      moved item id -14 name 'hostc' to location {rack=rack1} in crush map
      [ceph: root@clienta /]# ceph osd crush move hostd rack=rack2
      moved item id -15 name 'hostd' to location {rack=rack2} in crush map
    4. Place the OSDs as leaves in the new tree and set all OSD weights to 1.0.

      [ceph: root@clienta /]# ceph osd crush set osd.1 1.0 root=review-cl260 \
      datacenter=dc1 rack=rack1 host=hostc
      set item id 1 name 'osd.1' weight 1 at location {datacenter=dc1,host=hostc,rack=rack1,root=review-cl260} to crush map
      [ceph: root@clienta /]# ceph osd crush set osd.2 1.0 root=review-cl260 \
      datacenter=dc1 rack=rack1 host=hostc
      set item id 2 name 'osd.2' weight 1 at location {datacenter=dc1,host=hostc,rack=rack1,root=review-cl260} to crush map
      [ceph: root@clienta /]# ceph osd crush set osd.3 1.0 root=review-cl260 \
      datacenter=dc2 rack=rack2 host=hostd
      set item id 3 name 'osd.3' weight 1 at location {datacenter=dc2,host=hostd,rack=rack2,root=review-cl260} to crush map
      [ceph: root@clienta /]# ceph osd crush set osd.4 1.0 root=review-cl260 \
      datacenter=dc2 rack=rack2 host=hostd
      set item id 4 name 'osd.4' weight 1 at location {datacenter=dc2,host=hostd,rack=rack2,root=review-cl260} to crush map
    5. Display the CRUSH map tree to verify the new hierarchy and OSD locations.

      [ceph: root@clienta /]# ceph osd tree
      ID  CLASS WEIGHT  TYPE NAME              STATUS REWEIGHT PRI-AFF
       -9       4.00000 root review-cl260
      -10       2.00000     datacenter dc1
      -12       2.00000         rack rack1
      -14       2.00000             host hostc
        1   hdd 1.00000                 osd.1      up  1.00000 1.00000
        2   hdd 1.00000                 osd.2      up  1.00000 1.00000
      -11       2.00000     datacenter dc2
      -13       2.00000         rack rack2
      -15       2.00000             host hostd
        3   hdd 1.00000                 osd.3      up  1.00000 1.00000
        4   hdd 1.00000                 osd.4      up  1.00000 1.00000
       -1       0.04898 root default
       -3       0.00980     host serverc
        0   hdd 0.00980         osd.0              up  1.00000 1.00000
       -5       0.00980     host serverd
        5   hdd 0.00980         osd.5              up  1.00000 1.00000
       -7       0.02939     host servere
        6   hdd 0.00980         osd.6              up  1.00000 1.00000
        7   hdd 0.00980         osd.7              up  1.00000 1.00000
        8   hdd 0.00980         osd.8              up  1.00000 1.00000
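      As an optional cross-check, you can query the CRUSH location of a single OSD instead of reading the whole tree. This is a sketch; the JSON output (not shown) includes the host, rack, datacenter, and root buckets that now contain the OSD.

      [ceph: root@clienta /]# ceph osd find 1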
  2. Add a CRUSH rule called replicated1 of type replicated. Set the root to review-cl260 and the failure domain to datacenter.

    1. Use the ceph osd crush rule create-replicated command to create the rule.

      [ceph: root@clienta /]# ceph osd crush rule create-replicated replicated1 \
      review-cl260 datacenter
    2. Verify that the replicated1 CRUSH rule was created correctly. Record the CRUSH rule ID; it might be different in your lab environment.

      [ceph: root@clienta /]# ceph osd crush rule dump | grep -B2 -A 20 replicated1
          {
              "rule_id": 1,
              "rule_name": "replicated1",
              "ruleset": 1,
              "type": 1,
              "min_size": 1,
              "max_size": 10,
              "steps": [
                  {
                      "op": "take",
                      "item": -9,
                      "item_name": "review-cl260"
                  },
                  {
                      "op": "chooseleaf_firstn",
                      "num": 0,
                      "type": "datacenter"
                  },
                  {
                      "op": "emit"
                  }
              ]
          }
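      Optionally, you can dry-run the new rule against the binary CRUSH map with crushtool to preview which OSDs it selects. The commands below are a sketch; the rule ID (1) and replica count (2) are examples and might differ in your environment.

      [ceph: root@clienta /]# ceph osd getcrushmap -o /tmp/cm.bin
      [ceph: root@clienta /]# crushtool -i /tmp/cm.bin --test --rule 1 --num-rep 2 --show-mappings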
  3. Create a new replicated pool called reviewpool with 64 placement groups that uses the replicated1 CRUSH rule from the previous step.

    1. Create the pool.

      [ceph: root@clienta /]# ceph osd pool create reviewpool 64 64 \
      replicated replicated1
      pool 'reviewpool' created
    2. Verify that the pool was created correctly. The pool ID and CRUSH rule ID might be different in your lab environment. Compare the CRUSH rule ID with the output of the previous step.

      [ceph: root@clienta /]# ceph osd pool ls detail | grep reviewpool
      pool 5 'reviewpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 155 flags hashpspool stripe_width 0
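      You can also query the rule assigned to the pool by name, which is a quick way to confirm that it matches the rule created in the previous step. A minimal sketch:

      [ceph: root@clienta /]# ceph osd pool get reviewpool crush_rule
      crush_rule: replicated1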
  4. Set CRUSH tunables to use the optimal profile.

    1. Set the CRUSH tunable profile to optimal.

      [ceph: root@clienta /]# ceph osd crush tunables optimal
      adjusted tunables profile to optimal
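      To review the tunable values that the optimal profile applied, display them as shown in this sketch; the exact values depend on the Ceph release in your environment.

      [ceph: root@clienta /]# ceph osd crush show-tunables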
  5. Return to workstation as the student user.

    1. Exit the cephadm shell, then log out of clienta to return to workstation.

      [ceph: root@clienta /]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

Grade your work by running the lab grade map-review command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade map-review

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish map-review

This concludes the lab.
