In this lab, you will modify the CRUSH map, create a CRUSH rule, and set the CRUSH tunables profile.
Outcomes
You should be able to create a new CRUSH hierarchy and move OSDs into it, create a CRUSH rule and configure a replicated pool to use it, and set the CRUSH tunables profile.
As the student user on the workstation machine, use the lab command to prepare your system for this lab.
[student@workstation ~]$ lab start map-review
This command confirms that the hosts required for this exercise are accessible, backs up the CRUSH map, and sets the mon_allow_pool_delete setting to true.
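Optionally, after you log in to the cluster in the steps below, you can confirm that setting yourself. This check is not part of the original lab output; it assumes you run it from the cephadm shell:
[ceph: root@clienta /]# ceph config get mon mon_allow_pool_delete
true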
Procedure 5.3. Instructions
Create a new CRUSH hierarchy under root=review-cl260 that has two data center buckets (dc1 and dc2), two rack buckets (rack1 and rack2), one in each data center, and two host buckets (hostc and hostd), one in each rack.
Place osd.1 and osd.2 into dc1, rack1, hostc.
Place osd.3 and osd.4 into dc2, rack2, hostd.
Log in to clienta as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
Create the buckets with the ceph osd crush add-bucket command.
[ceph: root@clienta /]# ceph osd crush add-bucket review-cl260 root
added bucket review-cl260 type root to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket dc1 datacenter
added bucket dc1 type datacenter to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket dc2 datacenter
added bucket dc2 type datacenter to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket rack1 rack
added bucket rack1 type rack to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket rack2 rack
added bucket rack2 type rack to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket hostc host
added bucket hostc type host to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket hostd host
added bucket hostd type host to crush map
Use the ceph osd crush move command to build the hierarchy.
[ceph: root@clienta /]# ceph osd crush move dc1 root=review-cl260
moved item id -10 name 'dc1' to location {root=review-cl260} in crush map
[ceph: root@clienta /]# ceph osd crush move dc2 root=review-cl260
moved item id -11 name 'dc2' to location {root=review-cl260} in crush map
[ceph: root@clienta /]# ceph osd crush move rack1 datacenter=dc1
moved item id -12 name 'rack1' to location {datacenter=dc1} in crush map
[ceph: root@clienta /]# ceph osd crush move rack2 datacenter=dc2
moved item id -13 name 'rack2' to location {datacenter=dc2} in crush map
[ceph: root@clienta /]# ceph osd crush move hostc rack=rack1
moved item id -14 name 'hostc' to location {rack=rack1} in crush map
[ceph: root@clienta /]# ceph osd crush move hostd rack=rack2
moved item id -15 name 'hostd' to location {rack=rack2} in crush map
Place the OSDs as leaves in the new tree and set all OSD weights to 1.0.
[ceph: root@clienta /]# ceph osd crush set osd.1 1.0 root=review-cl260 \
datacenter=dc1 rack=rack1 host=hostc
set item id 1 name 'osd.1' weight 1 at location {datacenter=dc1,host=hostc,rack=rack1,root=review-cl260} to crush map
[ceph: root@clienta /]# ceph osd crush set osd.2 1.0 root=review-cl260 \
datacenter=dc1 rack=rack1 host=hostc
set item id 2 name 'osd.2' weight 1 at location {datacenter=dc1,host=hostc,rack=rack1,root=review-cl260} to crush map
[ceph: root@clienta /]# ceph osd crush set osd.3 1.0 root=review-cl260 \
datacenter=dc2 rack=rack2 host=hostd
set item id 3 name 'osd.3' weight 1 at location {datacenter=dc2,host=hostd,rack=rack2,root=review-cl260} to crush map
[ceph: root@clienta /]# ceph osd crush set osd.4 1.0 root=review-cl260 \
datacenter=dc2 rack=rack2 host=hostd
set item id 4 name 'osd.4' weight 1 at location {datacenter=dc2,host=hostd,rack=rack2,root=review-cl260} to crush map
Display the CRUSH map tree to verify the new hierarchy and OSD locations.
[ceph: root@clienta /]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-9 4.00000 root review-cl260
-10 2.00000 datacenter dc1
-12 2.00000 rack rack1
-14 2.00000 host hostc
1 hdd 1.00000 osd.1 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
-11 2.00000 datacenter dc2
-13 2.00000 rack rack2
-15 2.00000 host hostd
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
-1 0.04898 root default
-3 0.00980 host serverc
0 hdd 0.00980 osd.0 up 1.00000 1.00000
-5 0.00980 host serverd
5 hdd 0.00980 osd.5 up 1.00000 1.00000
-7 0.02939 host servere
6 hdd 0.00980 osd.6 up 1.00000 1.00000
7 hdd 0.00980 osd.7 up 1.00000 1.00000
8 hdd 0.00980 osd.8 up 1.00000 1.00000
Add a CRUSH rule called replicated1 of type replicated.
Set the root to review-cl260 and the failure domain to datacenter.
Use the ceph osd crush rule create-replicated command to create the rule.
[ceph: root@clienta /]# ceph osd crush rule create-replicated replicated1 \
review-cl260 datacenter
Verify that the replicated1 CRUSH rule was created correctly.
Record the CRUSH rule ID; it might be different in your lab environment.
[ceph: root@clienta /]# ceph osd crush rule dump | grep -B2 -A 20 replicated1
    {
        "rule_id": 1,
        "rule_name": "replicated1",
        "ruleset": 1,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -9,
                "item_name": "review-cl260"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "datacenter"
            },
            {
                "op": "emit"
            }
        ]
    }
Create a new replicated pool called reviewpool with 64 PGs that use the new CRUSH rule from the previous step.
Create the pool.
[ceph: root@clienta /]# ceph osd pool create reviewpool 64 64 \
replicated replicated1
pool 'reviewpool' created
Verify that the pool was created correctly. The pool ID and CRUSH rule ID might be different in your lab environment. Compare the CRUSH rule ID with the output of the previous step.
[ceph: root@clienta /]# ceph osd pool ls detail | grep reviewpool
pool 5 'reviewpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 155 flags hashpspool stripe_width 0
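As an extra verification that is not part of the captured lab output, you can also query the pool's assigned rule by name instead of by ID:
[ceph: root@clienta /]# ceph osd pool get reviewpool crush_rule
crush_rule: replicated1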
Set CRUSH tunables to use the optimal profile.
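The captured output does not include the command for this step. Assuming you are still in the cephadm shell, the standard command is shown below; the confirmation message should be similar to the following:
[ceph: root@clienta /]# ceph osd crush tunables optimal
adjusted tunables profile to optimal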
Return to workstation as the student user.
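The exit sequence is not shown in the captured output; a minimal sketch is to leave the cephadm shell and then close the SSH session:
[ceph: root@clienta /]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$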
This concludes the lab.
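If your environment follows the same lab command pattern used to start this exercise, you can clean up from workstation with the assumed counterpart command below; verify the exact action name in your course environment:
[student@workstation ~]$ lab finish map-review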