Guided Exercise: Managing the OSD Map

In this exercise, you will modify and verify OSD maps for a common use case.

Outcomes

You should be able to display the OSD map and modify the OSD near-full and full ratios.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start map-osd

This command confirms that the hosts required for this exercise are accessible. It resets the full_ratio and nearfull_ratio settings to the default values, and installs the ceph-base package on servera.

Procedure 5.2. Instructions

  1. Log in to clienta as the admin user and use sudo to run the cephadm shell. Verify that the cluster status is HEALTH_OK.

    [student@workstation ~]$ ssh admin@clienta
    [admin@clienta ~]$ sudo cephadm shell
    [ceph: root@clienta /]# ceph health
    HEALTH_OK
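    If the cluster does not report HEALTH_OK, you can use the ceph health detail and ceph status commands to display more information about the cause before you continue:

    [ceph: root@clienta /]# ceph health detail
    ...output omitted...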
  2. Run the ceph osd dump command to display the OSD map. Record the current epoch value in your lab environment, and the values of the full_ratio and nearfull_ratio settings.

    Verify that the status of each OSD is up and in.

    [ceph: root@clienta /]# ceph osd dump
    epoch 478
    fsid 11839bde-156b-11ec-bb71-52540000fa0c
    created 2021-09-14T14:50:39.401260+0000
    modified 2021-09-27T12:04:26.832212+0000
    flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
    crush_version 69
    full_ratio 0.95
    backfillfull_ratio 0.9
    nearfull_ratio 0.85
    require_min_compat_client luminous
    min_compat_client luminous
    require_osd_release pacific
    stretch_mode_enabled false
    pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 475 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
    ...output omitted...
    osd.0 up   in  weight 1 up_from 471 up_thru 471 down_at 470 last_clean_interval [457,466) [v2:172.25.250.12:6801/1228351148,v1:172.25.250.12:6802/1228351148] [v2:172.25.249.12:6803/1228351148,v1:172.25.249.12:6804/1228351148] exists,up cfe311b0-dea9-4c0c-a1ea-42aaac4cb160
    ...output omitted...
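    To inspect only the epoch and ratio values, you can filter the ceph osd dump output, assuming that the grep command is available in the cephadm shell container:

    [ceph: root@clienta /]# ceph osd dump | grep -E 'epoch|ratio'
    epoch 478
    full_ratio 0.95
    backfillfull_ratio 0.9
    nearfull_ratio 0.85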
  3. Set the full_ratio and nearfull_ratio settings, and then verify the new values.

    1. Set the full_ratio parameter to 0.97 (97%) and nearfull_ratio to 0.9 (90%).

      [ceph: root@clienta /]# ceph osd set-full-ratio 0.97
      osd set-full-ratio 0.97
      [ceph: root@clienta /]# ceph osd set-nearfull-ratio 0.9
      osd set-nearfull-ratio 0.9
    2. Verify the full_ratio and nearfull_ratio values. Compare the epoch value with the value from the previous dump of the OSD map. The epoch has incremented by two because each ceph osd set-*-ratio command produces a new OSD map version.

      [ceph: root@clienta /]# ceph osd dump
      epoch 480
      fsid 11839bde-156b-11ec-bb71-52540000fa0c
      created 2021-09-14T14:50:39.401260+0000
      modified 2021-09-27T12:27:38.328351+0000
      flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
      crush_version 69
      full_ratio 0.97
      backfillfull_ratio 0.9
      nearfull_ratio 0.9
      ...output omitted...
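      The OSD map also stores a backfillfull_ratio setting, which controls when an OSD is considered too full to accept backfill operations. This exercise does not change it, but you could adjust it in the same way; the value 0.92 in the following command is only an example:

      [ceph: root@clienta /]# ceph osd set-backfillfull-ratio 0.92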
  4. Extract and view the OSD map.

    1. Instead of using the ceph osd dump command, use the ceph osd getmap command to extract a copy of the OSD map to a binary file, and then use the osdmaptool command to view the file.

      Use the ceph osd getmap command to save a copy of the OSD map in the map.bin file.

      [ceph: root@clienta /]# ceph osd getmap -o map.bin
      got osdmap epoch 480
    2. Use the osdmaptool --print command to display the text version of the binary OSD map. The output is similar to the output of the ceph osd dump command.

      [ceph: root@clienta /]# osdmaptool --print map.bin
      osdmaptool: osdmap file 'map.bin'
      epoch 480
      fsid 11839bde-156b-11ec-bb71-52540000fa0c
      created 2021-09-14T14:50:39.401260+0000
      modified 2021-09-27T12:27:38.328351+0000
      flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
      crush_version 69
      full_ratio 0.97
      backfillfull_ratio 0.9
      nearfull_ratio 0.9
      ...output omitted...
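      The ceph osd getmap command also accepts an epoch argument, so you can retrieve an earlier version of the OSD map while the Monitors still retain it. For example, to save the epoch that you recorded in step 2 to a file (the epoch number and file name here are only illustrative):

      [ceph: root@clienta /]# ceph osd getmap 478 -o map478.bin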
  5. Extract and decompile the current CRUSH map, then recompile it and import it into a copy of the OSD map. You do not change any map settings; you only observe how the epoch changes.

    1. Use the osdmaptool --export-crush command to extract a binary copy of the CRUSH map and save it in the crush.bin file.

      [ceph: root@clienta /]# osdmaptool --export-crush crush.bin map.bin
      osdmaptool: osdmap file 'map.bin'
      osdmaptool: exported crush map to crush.bin
    2. Use the crushtool command to decompile the binary CRUSH map.

      [ceph: root@clienta /]# crushtool -d crush.bin -o crush.txt
    3. Use the crushtool command to compile the CRUSH map using the crush.txt file. Send the output to the crushnew.bin file.

      [ceph: root@clienta /]# crushtool -c crush.txt -o crushnew.bin
    4. Use the osdmaptool --import-crush command to import the new binary CRUSH map into a copy of the binary OSD map.

      [ceph: root@clienta /]# cp map.bin mapnew.bin
      [ceph: root@clienta /]# osdmaptool --import-crush crushnew.bin mapnew.bin
      osdmaptool: osdmap file 'mapnew.bin'
      osdmaptool: imported 1300 byte crush map from crushnew.bin
      osdmaptool: writing epoch 482 to mapnew.bin
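      In this exercise you import the CRUSH map only into a local copy of the OSD map. To apply a modified CRUSH map to the running cluster, you would instead use the ceph osd setcrushmap command. Do not run this command in this exercise:

      [ceph: root@clienta /]# ceph osd setcrushmap -i crushnew.bin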
  6. Use the osdmaptool command to test the impact of changes to the CRUSH map before applying them in production.

    1. Run the osdmaptool --test-map-pgs-dump command to display the mapping between PGs and OSDs. The osdmaptool command output might be different in your lab environment.

      [ceph: root@clienta /]# osdmaptool --test-map-pgs-dump mapnew.bin
      osdmaptool: osdmap file 'mapnew.bin'
      pool 3 pg_num 32
      1.0     [3,7,2] 3
      1.1     [7,0,5] 7
      1.2     [4,0,8] 4
      ...output omitted...
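      For a summary of the PG distribution instead of every individual mapping, you can run the osdmaptool --test-map-pgs command against the same file. Its output includes the number of PGs mapped to each OSD, which helps you judge how evenly a CRUSH change distributes data:

      [ceph: root@clienta /]# osdmaptool --test-map-pgs mapnew.bin
      ...output omitted...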
    2. Return to workstation as the student user.

      [ceph: root@clienta /]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish map-osd

This concludes the guided exercise.
