In this exercise, you will modify and verify OSD maps for a common use case.
Outcomes
You should be able to display the OSD map and modify the OSD near-full and full ratios.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start map-osd
This command confirms that the hosts required for this exercise are accessible.
It resets the full_ratio and nearfull_ratio settings to the default values, and installs the ceph-base package on servera.
Procedure 5.2. Instructions
Log in to clienta as the admin user and use sudo to run the cephadm shell.
Verify that the cluster status is HEALTH_OK.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]# ceph health
HEALTH_OK
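If the cluster reports a warning or error instead of HEALTH_OK, the ceph health detail command lists the specific checks that are failing. This extra check is optional and is not required by this exercise.
[ceph: root@clienta /]# ceph health detail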
Run the ceph osd dump command to display the OSD map.
Record the current epoch value in your lab environment.
Record the value of the full_ratio and nearfull_ratio settings.
Verify that the status of each OSD is up and in.
[ceph: root@clienta /]# ceph osd dump
epoch 478
fsid 11839bde-156b-11ec-bb71-52540000fa0c
created 2021-09-14T14:50:39.401260+0000
modified 2021-09-27T12:04:26.832212+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 69
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client luminous
min_compat_client luminous
require_osd_release pacific
stretch_mode_enabled false
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 475 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
...output omitted...
osd.0 up   in  weight 1 up_from 471 up_thru 471 down_at 470 last_clean_interval [457,466) [v2:172.25.250.12:6801/1228351148,v1:172.25.250.12:6802/1228351148] [v2:172.25.249.12:6803/1228351148,v1:172.25.249.12:6804/1228351148] exists,up cfe311b0-dea9-4c0c-a1ea-42aaac4cb160
...output omitted...
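The dump is long. As an optional shortcut, you can filter the output to show only the ratio settings or only the per-OSD status lines, for example:
[ceph: root@clienta /]# ceph osd dump | grep ratio
[ceph: root@clienta /]# ceph osd dump | grep ^osd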
Set the full_ratio and nearfull_ratio parameters, and then verify the values.
Set the full_ratio parameter to 0.97 (97%) and nearfull_ratio to 0.9 (90%).
[ceph: root@clienta /]# ceph osd set-full-ratio 0.97
osd set-full-ratio 0.97
[ceph: root@clienta /]# ceph osd set-nearfull-ratio 0.9
osd set-nearfull-ratio 0.9
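The OSD map also stores a backfillfull_ratio, which this exercise leaves at its default of 0.9. If you ever needed to change it, the analogous command is shown here for reference only; you do not need to run it in this exercise.
[ceph: root@clienta /]# ceph osd set-backfillfull-ratio 0.9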
Verify the full_ratio and nearfull_ratio values.
Compare this epoch value with the value from the previous dump of the OSD map.
The epoch has increased by two because each ceph osd set-*-ratio command produces a new OSD map version.
[ceph: root@clienta /]# ceph osd dump
epoch 480
fsid 11839bde-156b-11ec-bb71-52540000fa0c
created 2021-09-14T14:50:39.401260+0000
modified 2021-09-27T12:27:38.328351+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 69
full_ratio 0.97
backfillfull_ratio 0.9
nearfull_ratio 0.9
...output omitted...
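If you only want to check the epoch without reviewing a full dump, the ceph osd stat summary also reports the current OSD map epoch at the end of its one-line output. This optional check produces output that varies by environment, so it is not shown here.
[ceph: root@clienta /]# ceph osd stat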
Extract and view the OSD map.
Instead of using the ceph osd dump command, use the ceph osd getmap command to extract a copy of the OSD map to a binary file and the osdmaptool command to view the file.
Use the ceph osd getmap command to save a copy of the OSD map in the map.bin file.
[ceph: root@clienta /]# ceph osd getmap -o map.bin
got osdmap epoch 480
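The ceph osd getmap command can also take an epoch number to retrieve an older version of the map, provided that the monitors still store that version. For example, this optional variant would save epoch 478 to an arbitrarily named file:
[ceph: root@clienta /]# ceph osd getmap 478 -o map478.bin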
Use the osdmaptool --print command to display the text version of the binary OSD map. The output is similar to the output of the ceph osd dump command.
[ceph: root@clienta /]# osdmaptool --print map.bin
osdmaptool: osdmap file 'map.bin'
epoch 480
fsid 11839bde-156b-11ec-bb71-52540000fa0c
created 2021-09-14T14:50:39.401260+0000
modified 2021-09-27T12:27:38.328351+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 69
full_ratio 0.97
backfillfull_ratio 0.9
nearfull_ratio 0.9
...output omitted...
Extract and decompile the current CRUSH map, then compile and import the CRUSH map. You will not change any map settings, but only observe the change in the epoch.
Use the osdmaptool --export-crush command to extract a binary copy of the CRUSH map and save it in the crush.bin file.
[ceph: root@clienta /]# osdmaptool --export-crush crush.bin map.bin
osdmaptool: osdmap file 'map.bin'
osdmaptool: exported crush map to crush.bin
Use the crushtool command to decompile the binary CRUSH map.
[ceph: root@clienta /]# crushtool -d crush.bin -o crush.txt
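The decompiled crush.txt file is plain text that you can inspect or edit before recompiling. Its exact contents depend on your cluster; as a rough illustration only, with placeholder host names, IDs, and weights, the file is organized into sections like these:
# begin crush map
tunable choose_local_tries 0
...output omitted...
# devices
device 0 osd.0 class hdd
...output omitted...
# buckets
host serverc {
        id -3
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 0.010
        ...output omitted...
}
# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
# end crush map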
Use the crushtool command to compile the CRUSH map using the crush.txt file. Send the output to the crushnew.bin file.
[ceph: root@clienta /]# crushtool -c crush.txt -o crushnew.bin
Use the osdmaptool --import-crush command to import the new binary CRUSH map into a copy of the binary OSD map.
[ceph: root@clienta /]# cp map.bin mapnew.bin
[ceph: root@clienta /]# osdmaptool --import-crush crushnew.bin mapnew.bin
osdmaptool: osdmap file 'mapnew.bin'
osdmaptool: imported 1300 byte crush map from crushnew.bin
osdmaptool: writing epoch 482 to mapnew.bin
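Importing the CRUSH map into mapnew.bin modifies only the local file; the running cluster is unchanged. In a real change, after testing, you would typically load the compiled CRUSH map back into the cluster with a command such as the following, shown here for reference only. Do not run it in this exercise.
[ceph: root@clienta /]# ceph osd setcrushmap -i crushnew.bin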
Use the osdmaptool command to test the impact of changes to the CRUSH map before applying them in production.
Run the osdmaptool --test-map-pgs-dump command to display the mapping between PGs and OSDs.
The osdmaptool command output might be different in your lab environment.
[ceph: root@clienta /]# osdmaptool --test-map-pgs-dump mapnew.bin
osdmaptool: osdmap file 'mapnew.bin'
pool 3 pg_num 32
1.0 [3,7,2] 3
1.1 [7,0,5] 7
1.2 [4,0,8] 4
...output omitted...
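The --test-map-pgs-dump output lists every PG individually. If you only want a summary of how evenly the PGs map across the OSDs, the related --test-map-pgs option prints per-OSD counts instead. This optional check produces output that varies by environment.
[ceph: root@clienta /]# osdmaptool --test-map-pgs mapnew.bin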
Return to workstation as the student user.
[ceph: root@clienta /]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.