After completing this section, you should be able to expand capacity to meet application storage requirements by adding OSDs to an existing cluster.
You can expand the storage capacity of your Red Hat Ceph Storage cluster without disrupting ongoing storage operations. There are two ways to expand the storage in your cluster:
Add additional OSD nodes to the cluster, referred to as scaling out.
Add additional storage space to the existing OSD nodes, referred to as scaling up.
As a storage administrator, you can add more hosts to a Ceph storage cluster to maintain cluster health and provide sufficient load capacity. Add one or more OSDs to expand the storage cluster capacity when the current storage space is becoming full.
As the root user, add the Ceph storage cluster public SSH key to the root user's authorized_keys file on the new host.
[root@adm ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@new-osd-1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
Number of key(s) added: 1
...output omitted...

As the root user, add new nodes to the inventory file located in /usr/share/cephadm-ansible/hosts/.
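The inventory is a plain-text list of managed host names, one per line. The following minimal sketch assumes an inventory file named hosts in that directory; the file name and existing entries are examples only:

existing-osd-1
new-osd-1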
Run the preflight playbook with the --limit option to restrict the playbook's tasks to run only on the nodes specified.
The Ansible Playbook verifies that the nodes to be added meet the prerequisite package requirements.
[root@adm ~]# ansible-playbook -i /usr/share/cephadm-ansible/hosts/ \
/usr/share/cephadm-ansible/cephadm-preflight.yml \
--limit new-osd-1
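If you are adding several nodes at the same time, the --limit option also accepts a comma-separated list of hosts. A hedged sketch, assuming new-osd-2 is another new node already present in the inventory:

[root@adm ~]# ansible-playbook -i /usr/share/cephadm-ansible/hosts/ \
/usr/share/cephadm-ansible/cephadm-preflight.yml \
--limit new-osd-1,new-osd-2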
Choose one of the methods to add new hosts to the Ceph storage cluster:
As the root user, in the Cephadm shell, use the ceph orch host add command to add a new host to the storage cluster.
In this example, the command also assigns host labels.
[ceph: root@adm /]# ceph orch host add new-osd-1 --labels=mon,osd,mgr
Added host 'new-osd-1' with addr '192.168.122.102'
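The ceph orch host add command also accepts the host's IP address explicitly, which helps when the host name does not yet resolve from the admin node. A sketch of that form, reusing the address from the preceding output:

[ceph: root@adm /]# ceph orch host add new-osd-1 192.168.122.102 --labels=mon,osd,mgr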
To add multiple hosts, create a YAML file with host descriptions. Create the YAML file within the admin container, where you will then run ceph orch.
service_type: host
addr:
hostname: new-osd-1
labels:
- mon
- osd
- mgr
---
service_type: host
addr:
hostname: new-osd-2
labels:
- mon
- osd
After creating the YAML file, run the ceph orch apply command to add the hosts:
[ceph: root@adm ~]# ceph orch apply -i host.yaml
Added host 'new-osd-1' with addr '192.168.122.102'
Added host 'new-osd-2' with addr '192.168.122.103'

Use ceph orch host ls from the cephadm shell to list the cluster nodes.
The STATUS column is blank when the host is online and operating normally.
[ceph: root@adm /]# ceph orch host ls
HOST ADDR LABELS STATUS
existing-osd-1 192.168.122.101 mon
new-osd-1 192.168.122.102 mon osd mgr
new-osd-2 192.168.122.103 mon osd

As a storage administrator, you can expand the Ceph cluster capacity by adding new storage devices to your existing OSD nodes, and then using the cephadm orchestrator to configure them as OSDs. Ceph considers a storage device available for use as an OSD only if it meets all of the following conditions (an example check appears after this list):
The device must not have any partitions.
The device must not have any LVM state.
The device must not be mounted.
The device must not contain a file system.
The device must not contain a Ceph BlueStore OSD.
The device must be larger than 5 GB.
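You can verify most of these conditions directly on the OSD node before handing the device to the orchestrator. A minimal sketch, reusing host osd-1 and device /dev/vdb from the listing below for illustration; lsblk -f shows partitions, file systems, and mount points:

[root@osd-1 ~]# lsblk -f /dev/vdb

If a device fails these checks because of leftover partitions, LVM metadata, or a file system, you can reset it with the orchestrator's zap subcommand. This destroys all data on the device:

[ceph: root@adm /]# ceph orch device zap osd-1 /dev/vdb --force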
Run the ceph orch device ls command from the cephadm shell to list the available devices.
The --wide option provides more device detail.
[ceph: root@adm /]# ceph orch device ls
Hostname Path Type Serial Size Health Ident Fault Available
osd-1 /dev/vdb hdd 8a8d3399-4da0-b 10.7G Unknown N/A N/A Yes
osd-1 /dev/vdc hdd 8b06b0af-4350-b 10.7G Unknown N/A N/A Yes
osd-1 /dev/vdd hdd e15146bc-4970-a 10.7G Unknown N/A N/A Yes
osd-2 /dev/vdb hdd 82dc7aff-45bb-9 10.7G Unknown N/A N/A Yes
osd-2 /dev/vdc hdd e7f82a83-44f2-b 10.7G Unknown N/A N/A Yes
osd-2 /dev/vdd hdd fc290db7-4636-a 10.7G Unknown N/A N/A Yes
osd-3 /dev/vdb hdd cb17228d-45d3-b 10.7G Unknown N/A N/A Yes
osd-3 /dev/vdc hdd d11bb434-4275-a 10.7G Unknown N/A N/A Yes
osd-3 /dev/vdd hdd 68e406a5-4954-9 10.7G Unknown N/A N/A Yes

As the root user, run the ceph orch daemon add osd command to create an OSD using a specific device on a specific host.
[ceph: root@adm /]# ceph orch daemon add osd osd-1:/dev/vdb
Created osd(s) 0 on host 'osd-1'
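You can confirm that the new OSD joined the cluster and is up; for example, ceph osd tree lists OSDs by host, and ceph -s reports the total OSD count (output not shown here):

[ceph: root@adm /]# ceph osd tree
[ceph: root@adm /]# ceph -s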
Alternatively, run the ceph orch apply osd --all-available-devices command to deploy OSDs on all available and unused devices.

[ceph: root@adm /]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
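Note that this command creates a managed OSD service: cephadm continues to create OSDs automatically on any qualifying device that later becomes available. If you do not want that behavior, you can mark the service as unmanaged; a hedged sketch:

[ceph: root@adm /]# ceph orch apply osd --all-available-devices --unmanaged=true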
You can also create OSDs by using only specific devices on specific hosts, by including selective disk properties in an OSD service specification. The following example creates two OSDs in the default_drive_group group, backed by /dev/vdc and /dev/vdd on each host.
[ceph: root@adm /]# cat /var/lib/ceph/osd/osd_spec.yml
service_type: osd
service_id: default_drive_group
placement:
  hosts:
  - osd-1
  - osd-2
data_devices:
  paths:
  - /dev/vdc
  - /dev/vdd
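Before applying the specification, you can preview the OSDs that it would create; a sketch assuming the --dry-run option of ceph orch apply:

[ceph: root@adm /]# ceph orch apply -i /var/lib/ceph/osd/osd_spec.yml --dry-run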
Run the ceph orch apply command to implement the configuration in the YAML file.

[ceph: root@adm /]# ceph orch apply -i /var/lib/ceph/osd/osd_spec.yml
Scheduled osd.default_drive_group update...
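After the orchestrator finishes creating the OSDs, you can verify the added capacity; for example, ceph orch ps --daemon-type osd lists the OSD daemons per host, and ceph df shows the new raw capacity (output not shown here):

[ceph: root@adm /]# ceph orch ps --daemon-type osd
[ceph: root@adm /]# ceph df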
For more information, refer to the Adding hosts chapter in the Red Hat Ceph Storage Installation Guide at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/installation_guide/index#adding-hosts_install

For more information, refer to the Management of OSDs using the Ceph Orchestrator chapter in the Red Hat Ceph Storage Operations Guide at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/operations_guide/index#management-of-osds-using-the-ceph-orchestrator