In this review, you configure a Red Hat Ceph Storage cluster to meet specified requirements.
Outcomes
You should be able to configure cluster settings and components, such as pools, users, OSDs, and the CRUSH map.
If you did not reset your classroom virtual machines at the end of the last chapter, save any work that you want to keep from earlier exercises on those machines, and reset the classroom environment now.
Reset your environment before performing this exercise. All comprehensive review labs start with a clean initial classroom environment that includes a prebuilt, fully operational Ceph cluster, and the remaining comprehensive reviews use this default cluster.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This script ensures that all cluster hosts are reachable.
[student@workstation ~]$ lab start comprehensive-review2
Specifications
Set the value of osd_pool_default_pg_num to 250 in the configuration database.
Create a CRUSH rule called onhdd to target HDD-based OSDs for replicated pools.
Create a replicated pool called rbd1 that uses the onhdd CRUSH map rule.
Set the application type to rbd and the number of replicas for the objects in this pool to five.
Create the following CRUSH hierarchy. Do not associate any OSD with this new tree.
default-4-lab (root bucket)
DC01 (datacenter bucket)
firstfloor (room bucket)
hostc (host bucket)
secondfloor (room bucket)
hostd (host bucket)
Create a new erasure code profile called cl260.
Pools that use this profile must use two data chunks and one coding chunk per object.
Create an erasure coded pool called testec that uses your new cl260 profile.
Set its application type to rgw.
Create a user called client.fortestec that can store and retrieve objects under the docs namespace in the pool called testec.
This user must not have access to any other pool or namespace.
Save the associated keyring file as /etc/ceph/ceph.client.fortestec.keyring on clienta.
Upload the /usr/share/doc/ceph/sample.ceph.conf file as an object called report under the docs namespace in the pool called testec.
Update the OSD near-capacity limits for the cluster, whose OSDs run on serverc, serverd, and servere.
Set the full ratio to 90% and the near-full ratio to 86%.
Locate the host on which the osd.7 service is running.
List the available storage devices on that host.
Set the value of osd_pool_default_pg_num to 250 in the configuration database.
Log in to clienta as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
Set the value of osd_pool_default_pg_num to 250 in the configuration database.
[ceph: root@clienta /]# ceph config set mon osd_pool_default_pg_num 250
Verify the setting.
[ceph: root@clienta /]# ceph config get mon osd_pool_default_pg_num
250
[ceph: root@clienta /]# ceph config dump | grep osd_pool_default_pg_num
mon       advanced  osd_pool_default_pg_num    250
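This setting only affects pools that are created after the change. If you also want to confirm the value that a running Monitor daemon is using, ceph config show can query it directly. The daemon name mon.serverc is an assumption about this classroom; list the actual Monitor daemon names with ceph orch ps --daemon-type mon and substitute one of them.
[ceph: root@clienta /]# ceph config show mon.serverc osd_pool_default_pg_num
250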
Create a CRUSH rule called onhdd to target HDD-based OSDs for replicated pools.
Create a new rule called onhdd to target HDD-based OSDs for replicated pools.
[ceph: root@clienta /]# ceph osd crush rule create-replicated onhdd default \
host hdd
Verify that the new rule exists.
[ceph: root@clienta /]# ceph osd crush rule ls
replicated_rule
onhdd
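If you want more detail than the rule name, ceph osd crush rule dump shows the rule steps; the output should indicate that the rule takes from the default root restricted to the hdd device class and uses host as the failure domain.
[ceph: root@clienta /]# ceph osd crush rule dump onhdd
...output omitted...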
Create a replicated pool called rbd1 that uses the onhdd CRUSH map rule.
Set the application type to rbd and the number of replicas for the objects in this pool to five.
Create a new replicated pool called rbd1 that uses the onhdd CRUSH map rule.
[ceph: root@clienta /]# ceph osd pool create rbd1 onhdd
pool 'rbd1' created
Set rbd as the application type for the pool.
[ceph: root@clienta /]# ceph osd pool application enable rbd1 rbd
enabled application 'rbd' on pool 'rbd1'
Increase the number of replicas for the pool to five and verify the new value.
[ceph: root@clienta /]# ceph osd pool set rbd1 size 5
set pool 6 size to 5
[ceph: root@clienta /]# ceph osd pool ls detail
...output omitted...
pool 6 'rbd1' replicated size 5 min_size 3 crush_rule 1 object_hash rjenkins pg_num 250 pgp_num 250 autoscale_mode on last_change 235 flags hashpspool stripe_width 0 application rbd
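As an alternative to scanning the ceph osd pool ls detail output, you can query individual pool parameters; a minimal check of the replica count and CRUSH rule might look like this (the pool ID in your output can differ).
[ceph: root@clienta /]# ceph osd pool get rbd1 size
size: 5
[ceph: root@clienta /]# ceph osd pool get rbd1 crush_rule
crush_rule: onhdd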
Create the following CRUSH hierarchy. Do not associate any OSD with this new tree.
default-4-lab (root bucket)
DC01 (datacenter bucket)
firstfloor (room bucket)
hostc (host bucket)
secondfloor (room bucket)
hostd (host bucket)
Create the buckets.
[ceph: root@clienta /]# ceph osd crush add-bucket default-4-lab root
added bucket default-4-lab type root to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket DC01 datacenter
added bucket DC01 type datacenter to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket firstfloor room
added bucket firstfloor type room to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket hostc host
added bucket hostc type host to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket secondfloor room
added bucket secondfloor type room to crush map
[ceph: root@clienta /]# ceph osd crush add-bucket hostd host
added bucket hostd type host to crush map
Build the hierarchy.
[ceph: root@clienta /]# ceph osd crush move DC01 root=default-4-lab
moved item id -10 name 'DC01' to location {root=default-4-lab} in crush map
[ceph: root@clienta /]# ceph osd crush move firstfloor datacenter=DC01
moved item id -11 name 'firstfloor' to location {datacenter=DC01} in crush map
[ceph: root@clienta /]# ceph osd crush move hostc room=firstfloor
moved item id -12 name 'hostc' to location {room=firstfloor} in crush map
[ceph: root@clienta /]# ceph osd crush move secondfloor datacenter=DC01
moved item id -13 name 'secondfloor' to location {datacenter=DC01} in crush map
[ceph: root@clienta /]# ceph osd crush move hostd room=secondfloor
moved item id -14 name 'hostd' to location {room=secondfloor} in crush map
Display the CRUSH map tree to verify the new hierarchy.
[ceph: root@clienta /]# ceph osd crush tree
ID   CLASS  WEIGHT   TYPE NAME
 -9         0        root default-4-lab
-10         0            datacenter DC01
-11         0                room firstfloor
-12         0                    host hostc
-13         0                room secondfloor
-14         0                    host hostd
 -1         0.08817  root default
 -3         0.02939      host serverc
  0    hdd  0.00980          osd.0
  1    hdd  0.00980          osd.1
  2    hdd  0.00980          osd.2
 -7         0.02939      host serverd
  3    hdd  0.00980          osd.3
  5    hdd  0.00980          osd.5
  7    hdd  0.00980          osd.7
 -5         0.02939      host servere
  4    hdd  0.00980          osd.4
  6    hdd  0.00980          osd.6
  8    hdd  0.00980          osd.8
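You can also list the children of a single bucket to spot-check the hierarchy; for example, the DC01 bucket should contain only the two room buckets.
[ceph: root@clienta /]# ceph osd crush ls DC01
firstfloor
secondfloor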
Create a new erasure code profile called cl260.
Pools that use this profile must set two data chunks and one coding chunk per object.
Create a new erasure code profile called cl260.
[ceph: root@clienta /]# ceph osd erasure-code-profile set cl260 k=2 m=1
Verify the new erasure code profile parameters.
[ceph: root@clienta /]# ceph osd erasure-code-profile get cl260
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=2
m=1
plugin=jerasure
technique=reed_sol_van
w=8
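To confirm that the profile was registered, list all erasure code profiles; the new cl260 profile should appear next to the built-in default profile.
[ceph: root@clienta /]# ceph osd erasure-code-profile ls
cl260
default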
Create an erasure coded pool called testec that uses your new cl260 profile.
Set its application type to rgw.
Create an erasure coded pool called testec that uses the cl260 profile.
[ceph: root@clienta /]# ceph osd pool create testec erasure cl260
pool 'testec' created
Set rgw as the application type for the pool.
[ceph: root@clienta /]# ceph osd pool application enable testec rgw
enabled application 'rgw' on pool 'testec'
List the new pool parameters.
[ceph: root@clienta /]# ceph osd pool ls detail
...output omitted...
pool 7 'testec' erasure profile cl260 size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 250 pgp_num 250 autoscale_mode on last_change 309 flags hashpspool stripe_width 8192 application rgw
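With k=2 and m=1, each object is stored as three chunks, which matches the size 3 shown in the pool detail. If you prefer a targeted check, you can query the profile that the pool uses.
[ceph: root@clienta /]# ceph osd pool get testec erasure_code_profile
erasure_code_profile: cl260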
Create a user called client.fortestec that can store and retrieve objects under the docs namespace in the pool called testec.
This user must not have access to any other pool or namespace.
Save the associated keyring file as /etc/ceph/ceph.client.fortestec.keyring on clienta.
Exit from the current cephadm shell.
Start a new cephadm shell with the /etc/ceph directory as a bind mount.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo cephadm shell --mount /etc/ceph/:/etc/ceph
Create a user called client.fortestec, with read and write capabilities in the namespace docs within the pool testec.
Save the associated keyring file as /etc/ceph/ceph.client.fortestec.keyring in the mounted directory.
[ceph: root@clienta /]# ceph auth get-or-create client.fortestec mon 'allow r' \
osd 'allow rw pool=testec namespace=docs' \
-o /etc/ceph/ceph.client.fortestec.keyring
To verify your work, attempt to store and retrieve an object.
The diff command returns no output when the file contents are the same.
When finished, remove the object.
[ceph: root@clienta /]# rados --id fortestec -p testec -N docs \
put testdoc /etc/services
[ceph: root@clienta /]# rados --id fortestec -p testec -N docs \
get testdoc /tmp/test
[ceph: root@clienta /]# diff /etc/services /tmp/test
[ceph: root@clienta /]# rados --id fortestec -p testec -N docs rm testdoc
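You can also review the capabilities granted to the new user; ceph auth get should show read-only access to the monitors and read-write access limited to the docs namespace of the testec pool.
[ceph: root@clienta /]# ceph auth get client.fortestec
...output omitted...
	caps mon = "allow r"
	caps osd = "allow rw pool=testec namespace=docs"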
Upload the /usr/share/doc/ceph/sample.ceph.conf file as an object called report under the docs namespace in the pool called testec.
Use the rados command to upload the /usr/share/doc/ceph/sample.ceph.conf file.
[ceph: root@clienta ~]# rados --id fortestec -p testec -N docs \
put report /usr/share/doc/ceph/sample.ceph.conf
Obtain report object details to confirm that the upload was successful.
[ceph: root@clienta ~]# rados --id fortestec -p testec -N docs stat report
testec/report mtime 2021-10-29T11:44:21.000000+0000, size 19216
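To double-check the namespace, you can list the objects that this user can see under docs; only the report object should be present at this point.
[ceph: root@clienta ~]# rados --id fortestec -p testec -N docs ls
report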
Update the OSD near-capacity limit information for the cluster.
Set the full ratio to 90% and the near-full ratio to 86%.
Set the full_ratio parameter to 0.9 (90%) and the nearfull_ratio to 0.86 (86%) in the OSD map.
[ceph: root@clienta ~]# ceph osd set-full-ratio 0.9
osd set-full-ratio 0.9
[ceph: root@clienta ~]# ceph osd set-nearfull-ratio 0.86
osd set-nearfull-ratio 0.86
Dump the OSD map and verify the new value of the two parameters.
[ceph: root@clienta ~]# ceph osd dump | grep ratio
full_ratio 0.9
backfillfull_ratio 0.9
nearfull_ratio 0.86
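The backfillfull_ratio line comes from a third, related threshold that this exercise does not ask you to change. If you ever needed to adjust it, the command follows the same pattern; the 0.88 value here is only an illustration, not part of the lab requirements.
[ceph: root@clienta ~]# ceph osd set-backfillfull-ratio 0.88
osd set-backfillfull-ratio 0.88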
Locate the host with the OSD 7 service.
List that host's available storage devices.
Locate the OSD 7 service.
The location of the OSD 7 service might be different in your lab environment.
[ceph: root@clienta ~]# ceph osd find osd.7
{
    "osd": 7,
    "addrs": {
        "addrvec": [
            {
                "type": "v2",
                "addr": "172.25.250.13:6816",
                "nonce": 1160376750
            },
            {
                "type": "v1",
                "addr": "172.25.250.13:6817",
                "nonce": 1160376750
            }
        ]
    },
    "osd_fsid": "53f9dd65-430a-4e5a-a2f6-536c5453f02a",
    "host": "serverd.lab.example.com",
    "crush_location": {
        "host": "serverd",
        "root": "default"
    }
}
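The ceph osd metadata command provides an alternative way to find the same information; its hostname field should match the host reported by ceph osd find (serverd in this environment, but possibly a different host in yours).
[ceph: root@clienta ~]# ceph osd metadata 7 | grep hostname
    "hostname": "serverd.lab.example.com",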
Use the ceph orch device ls command to list the available storage devices on the located host.
Use the host you located in your environment.
[ceph: root@clienta ~]# ceph orch device ls --hostname=serverd.lab.example.com
Hostname Path Type Serial Size Health Ident Fault Available
serverd.lab.example.com /dev/vde hdd 65d5b32d-594c-4dbe-b 10.7G Unknown N/A N/A Yes
serverd.lab.example.com /dev/vdf hdd 63124a05-de2b-434f-8 10.7G Unknown N/A N/A Yes
serverd.lab.example.com /dev/vdb hdd e20fad81-6237-409e-9 10.7G Unknown N/A N/A No
serverd.lab.example.com /dev/vdc hdd 6dad9f98-2a5a-4aa8-b 10.7G Unknown N/A N/A No
serverd.lab.example.com /dev/vdd hdd 880d431d-15f3-4c20-b 10.7G Unknown N/A N/A No
Return to workstation as the student user.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the lab.