In this exercise, you will expand the cluster by adding new OSDs to the nodes in the cluster, first by applying a service specification file and then by adding individual OSDs with the ceph orch daemon add command.
Outcomes
You should be able to expand your cluster by adding new OSDs.
Start this exercise only after having successfully completed the previous guided exercise, Deploying Red Hat Ceph Storage.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start deploy-expand
This command confirms that the Ceph cluster is reachable and provides an example service specification file.
Procedure 2.2. Instructions
In this exercise, expand the amount of storage in your Ceph storage cluster.
List the inventory of storage devices on the cluster hosts.
Log in to clienta as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
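The cephadm shell command runs the Ceph client tools inside a container on the host. If you only need to run a single command, you can append it after -- instead of opening an interactive shell; for example, this optional form produces the same result as running ceph status inside the shell:
[admin@clienta ~]$ sudo cephadm shell -- ceph status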
Display the storage device inventory on the Ceph cluster.
[ceph: root@clienta /]# ceph orch device ls
Hostname                 Path      Type  Serial                Size   Health   Ident  Fault  Available
clienta.lab.example.com  /dev/vdb  hdd   f97f3019-2827-4b84-8  10.7G  Unknown  N/A    N/A    Yes
clienta.lab.example.com  /dev/vdc  hdd   fdaeb489-de2c-4165-9  10.7G  Unknown  N/A    N/A    Yes
clienta.lab.example.com  /dev/vdd  hdd   0159ff1a-d45a-4ada-9  10.7G  Unknown  N/A    N/A    Yes
clienta.lab.example.com  /dev/vde  hdd   69f43b17-9af3-45f1-b  10.7G  Unknown  N/A    N/A    Yes
clienta.lab.example.com  /dev/vdf  hdd   bfc36a25-1680-49eb-8  10.7G  Unknown  N/A    N/A    Yes
serverc.lab.example.com  /dev/vdc  hdd   8b06b0af-ff15-4350-b  10.7G  Unknown  N/A    N/A    Yes
serverc.lab.example.com  /dev/vdd  hdd   e15146bc-22dd-4970-a  10.7G  Unknown  N/A    N/A    Yes
serverc.lab.example.com  /dev/vde  hdd   f24fe7d7-b400-44b8-b  10.7G  Unknown  N/A    N/A    Yes
serverc.lab.example.com  /dev/vdf  hdd   e5747c44-0afa-4918-8  10.7G  Unknown  N/A    N/A    Yes
serverc.lab.example.com  /dev/vdb  hdd   8a8d3399-52d9-4da0-b  10.7G  Unknown  N/A    N/A    No
serverd.lab.example.com  /dev/vdc  hdd   e7f82a83-56f6-44f2-b  10.7G  Unknown  N/A    N/A    Yes
serverd.lab.example.com  /dev/vdd  hdd   fc290db7-fa22-4636-a  10.7G  Unknown  N/A    N/A    Yes
serverd.lab.example.com  /dev/vde  hdd   565c1d73-48c4-4448-a  10.7G  Unknown  N/A    N/A    Yes
serverd.lab.example.com  /dev/vdf  hdd   90bf4d1f-83e5-4901-b  10.7G  Unknown  N/A    N/A    Yes
serverd.lab.example.com  /dev/vdb  hdd   82dc7aff-3c2a-45bb-9  10.7G  Unknown  N/A    N/A    No
servere.lab.example.com  /dev/vdc  hdd   d11bb434-5829-4275-a  10.7G  Unknown  N/A    N/A    Yes
servere.lab.example.com  /dev/vdd  hdd   68e406a5-9f0f-4954-9  10.7G  Unknown  N/A    N/A    Yes
servere.lab.example.com  /dev/vde  hdd   2670c8f2-acde-4948-8  10.7G  Unknown  N/A    N/A    Yes
servere.lab.example.com  /dev/vdf  hdd   5628d1f0-bdbf-4b05-8  10.7G  Unknown  N/A    N/A    Yes
servere.lab.example.com  /dev/vdb  hdd   cb17228d-c039-45d3-b  10.7G  Unknown  N/A    N/A    No
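Only devices that report Yes in the Available column can be used for new OSDs. The /dev/vdb devices on serverc, serverd, and servere report No because they are already in use by the OSDs deployed in the previous exercise. If the full inventory is hard to read, you can optionally limit the listing to one host and request extra detail, for example:
[ceph: root@clienta /]# ceph orch device ls --hostname=serverc.lab.example.com --wide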
Deploy two OSDs by using /dev/vdc and /dev/vdd on serverc.lab.example.com, serverd.lab.example.com, and servere.lab.example.com.
Create the osd_spec.yml file in the /var/lib/ceph/osd/ directory with the correct configuration.
For your convenience, you can copy and paste the content from the /root/expand-osd/osd_spec.yml file on clienta.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo cephadm shell --mount /root/expand-osd/osd_spec.yml
[ceph: root@clienta /]# cd /mnt
[ceph: root@clienta mnt]# cp osd_spec.yml /var/lib/ceph/osd/
[ceph: root@clienta mnt]# cat /var/lib/ceph/osd/osd_spec.yml
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
data_devices:
  paths:
    - /dev/vdb
    - /dev/vdc
    - /dev/vdd
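The specification also lists /dev/vdb, which is already in use; cephadm creates OSDs only on devices that are still available, so only /dev/vdc and /dev/vdd result in new OSDs. An OSD service specification can also select devices with filters instead of explicit paths. The following sketch is illustrative only and is not used in this exercise; the service_id, host pattern, and filter values are examples:
service_type: osd
service_id: filtered_drive_group
placement:
  host_pattern: 'server*'
data_devices:
  rotational: 1
  size: '10G:'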
Run the ceph orch apply command with the osd_spec.yml file to implement the configuration.
[ceph: root@clienta mnt]# ceph orch apply -i /var/lib/ceph/osd/osd_spec.yml
Scheduled osd.default_drive_group update...
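The change is applied asynchronously; cephadm schedules the update and creates the OSDs in the background. To preview what a specification would do without applying it, the ceph orch apply command accepts a --dry-run option. This optional check is not part of the exercise:
[ceph: root@clienta mnt]# ceph orch apply -i /var/lib/ceph/osd/osd_spec.yml --dry-run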
Add OSDs to servere by using the /dev/vde and /dev/vdf devices.
Display the servere storage device inventory on the Ceph cluster.
[ceph: root@clienta mnt]# ceph orch device ls --hostname=servere.lab.example.com
Hostname                 Path      Type  Serial                Size   Health   Ident  Fault  Available
servere.lab.example.com  /dev/vdb  hdd   4f0e87b2-dc76-457a-a  10.7G  Unknown  N/A    N/A    No
servere.lab.example.com  /dev/vdc  hdd   b8483b8a-13a9-4992-9  10.7G  Unknown  N/A    N/A    No
servere.lab.example.com  /dev/vdd  hdd   ff073d1f-31d8-477e-9  10.7G  Unknown  N/A    N/A    No
servere.lab.example.com  /dev/vde  hdd   705f4daa-f63f-450e-8  10.7G  Unknown  N/A    N/A    Yes
servere.lab.example.com  /dev/vdf  hdd   63adfb7c-9d9c-4575-8  10.7G  Unknown  N/A    N/A    Yes
Create the OSDs by using /dev/vde and /dev/vdf on servere.
[ceph: root@clienta mnt]# ceph orch daemon add osd \
servere.lab.example.com:/dev/vde
Created osd(s) 9 on host 'servere.lab.example.com'
[ceph: root@clienta mnt]# ceph orch daemon add osd \
servere.lab.example.com:/dev/vdf
Created osd(s) 10 on host 'servere.lab.example.com'
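The OSD IDs in your output might differ. To confirm which host and CRUSH location a specific OSD occupies, you can query it by ID; for example, assuming an OSD with ID 10 exists:
[ceph: root@clienta mnt]# ceph osd find 10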
Verify that the cluster is in a healthy state and that the OSDs were successfully added.
Verify that the cluster status is HEALTH_OK.
[ceph: root@clienta mnt]# ceph status
cluster:
id: 29179c6e-10ac-11ec-b149-52540000fa0c
health: HEALTH_OK
services:
mon: 4 daemons, quorum serverc.lab.example.com,clienta,servere,serverd (age 12m)
mgr: serverc.lab.example.com.dwsvgt(active, since 12m), standbys: serverd.kdkmia, servere.rdbtge, clienta.etponq
osd: 11 osds: 11 up (since 11m), 11 in (since 29m)
...output omitted...
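If the cluster instead reports HEALTH_WARN, for example while placement groups rebalance onto the new OSDs, the ceph health detail command lists the active health checks, and ceph osd stat summarizes the up and in OSD counts:
[ceph: root@clienta mnt]# ceph health detail
[ceph: root@clienta mnt]# ceph osd stat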
Use the ceph osd tree command to display the CRUSH tree.
Verify that the new OSDs' location in the infrastructure is correct.
[ceph: root@clienta mnt]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.10776 root default
-3 0.02939 host serverc
2 hdd 0.00980 osd.2 up 1.00000 1.00000
5 hdd 0.00980 osd.5 up 1.00000 1.00000
8 hdd 0.00980 osd.8 up 1.00000 1.00000
-7 0.02939 host serverd
1 hdd 0.00980 osd.1 up 1.00000 1.00000
4 hdd 0.00980 osd.4 up 1.00000 1.00000
7 hdd 0.00980 osd.7 up 1.00000 1.00000
-5 0.04898 host servere
0 hdd 0.00980 osd.0 up 1.00000 1.00000
3 hdd 0.00980 osd.3 up 1.00000 1.00000
6 hdd 0.00980 osd.6 up 1.00000 1.00000
9 hdd 0.00980 osd.9 up 1.00000 1.00000
10   hdd  0.00980          osd.10       up   1.00000  1.00000
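The WEIGHT column defaults to the approximate device capacity expressed in TiB, so each 10 GiB OSD weighs about 0.00980 (10/1024 ≈ 0.0098). Host weights are the sum of their OSDs: about 0.02939 for serverc and serverd with three OSDs each, about 0.04898 for servere with five, and about 0.10776 for the root bucket, matching the output above.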
Use the ceph osd df command to verify the data usage and the number of placement groups for each OSD.
[ceph: root@clienta mnt]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
2 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.15 1.05 38 up
5 hdd 0.00980 1.00000 10 GiB 15 MiB 2.1 MiB 0 B 13 MiB 10 GiB 0.15 1.03 28 up
8 hdd 0.00980 1.00000 10 GiB 20 MiB 2.1 MiB 0 B 18 MiB 10 GiB 0.20 1.37 39 up
1 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.16 1.08 38 up
4 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.15 1.06 34 up
7 hdd 0.00980 1.00000 10 GiB 15 MiB 2.1 MiB 0 B 13 MiB 10 GiB 0.15 1.04 33 up
0 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.15 1.05 22 up
3 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.15 1.05 25 up
6 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.15 1.05 23 up
9 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.15 1.05 24 up
10 hdd 0.00980 1.00000 10 GiB 16 MiB 2.1 MiB 0 B 14 MiB 10 GiB 0.15 1.05 26 up
TOTAL 110 GiB 330 MiB 68 MiB 2.8 KiB 262 MiB 110 GiB 0.29
MIN/MAX VAR: 0.65/1.89  STDDEV: 0.16
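The PGS column shows how many placement groups each OSD currently holds; these counts even out as Ceph rebalances data onto the new OSDs. To view the same usage information arranged by the CRUSH hierarchy, you can optionally run:
[ceph: root@clienta mnt]# ceph osd df tree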
Return to workstation as the student user.
[ceph: root@clienta mnt]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.