In this review, you deploy a Red Hat Ceph Storage cluster using a service specification file.
Outcomes
You should be able to deploy a Red Hat Ceph Storage cluster using a service specification file.
If you did not reset your classroom virtual machines at the end of the last chapter, save any work you want to keep from earlier exercises on those machines and reset the classroom environment now.
Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. This first comprehensive review will remove that cluster, but still requires the rest of the clean classroom environment.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start comprehensive-review1
This command confirms that the local container registry for the classroom is running and deletes the prebuilt Ceph cluster so it can be redeployed with the steps in this exercise.
This lab start script immediately deletes the prebuilt Ceph cluster and takes a few minutes to complete. Wait for the command to finish before continuing.
Specifications
Deploy a four-node Red Hat Ceph Storage cluster using a service specification file with these parameters:
Use the registry at registry.lab.example.com with the username registry and the password redhat.
Deploy MONs on the clienta, serverc, serverd, and servere nodes.
Deploy RGWs on the serverc and serverd nodes, with the service_id set to realm.zone.
Deploy MGRs on the clienta, serverc, serverd, and servere nodes.
Deploy OSDs on the serverc, serverd, and servere nodes, with the service_id set to default_drive_group.
On all OSD nodes, use the /dev/vdb, /dev/vdc, and /dev/vdd drives as data devices.
| Hostname                | IP Address    |
|-------------------------|---------------|
| clienta.lab.example.com | 172.25.250.10 |
| serverc.lab.example.com | 172.25.250.12 |
| serverd.lab.example.com | 172.25.250.13 |
| servere.lab.example.com | 172.25.250.14 |
After the cluster is installed, manually add the /dev/vde and /dev/vdf drives as data devices on the servere node.
Set the OSD journal size to 1024 MiB.
Use 172.25.250.0/24 for the OSD public network, and 172.25.249.0/24 for the OSD cluster network.
Using the serverc host as the bootstrap host, install the cephadm-ansible package, create the inventory file, and run the pre-flight playbook to prepare cluster hosts.
On the serverc host, install the cephadm-ansible package.
[student@workstation ~]$ ssh admin@serverc
[admin@serverc ~]$ sudo -i
[root@serverc ~]# yum install cephadm-ansible
...output omitted...
Complete!
Create the hosts inventory file in the /usr/share/cephadm-ansible directory.
[root@serverc ~]# cd /usr/share/cephadm-ansible
[root@serverc cephadm-ansible]# cat hosts
clienta.lab.example.com
serverc.lab.example.com
serverd.lab.example.com
servere.lab.example.com
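Optionally, before running the pre-flight playbook, you can confirm that Ansible can reach every host in the inventory. This check is not part of the lab steps and assumes that SSH access from serverc to the cluster hosts is already configured, as it is in this classroom.
[root@serverc cephadm-ansible]# ansible -i hosts all -m ping    # optional connectivity check
...output omitted...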
Run the cephadm-preflight.yml playbook.
[root@serverc cephadm-ansible]# ansible-playbook -i hosts \
cephadm-preflight.yml --extra-vars "ceph_origin="
...output omitted...
The ceph_origin variable is set to empty, which causes some playbook tasks to be skipped because, in this classroom, the Ceph packages are installed from a local classroom repository.
In a production environment, set ceph_origin to rhcs to enable the Red Hat Storage Tools repository for your supported deployment.
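For reference only, a production run of the same playbook looks like this sketch, with ceph_origin set to rhcs. Do not use it in the classroom, where the packages come from the local repository.
[root@serverc cephadm-ansible]# ansible-playbook -i hosts \
cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"
...output omitted...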
On the serverc host, create the initial-config-primary-cluster.yaml cluster service specification file in the /root/ceph directory.
Include four hosts with the following specifications:
Deploy MONs on clienta, serverc, serverd, and servere.
Deploy RGWs on serverc and serverd, with the service_id set to realm.zone.
Deploy MGRs on clienta, serverc, serverd, and servere.
Deploy OSDs on the serverc, serverd, and servere nodes, with the service_id set to default_drive_group.
On all OSD nodes, use the /dev/vdb, /dev/vdc, and /dev/vdd drives as data devices.
| Hostname                | IP Address    |
|-------------------------|---------------|
| clienta.lab.example.com | 172.25.250.10 |
| serverc.lab.example.com | 172.25.250.12 |
| serverd.lab.example.com | 172.25.250.13 |
| servere.lab.example.com | 172.25.250.14 |
Create the initial-config-primary-cluster.yaml cluster service specification file in the /root/ceph directory.
[root@serverc cephadm-ansible]# cd /root/ceph
[root@serverc ceph]# cat initial-config-primary-cluster.yaml
service_type: host
addr: 172.25.250.10
hostname: clienta.lab.example.com
---
service_type: host
addr: 172.25.250.12
hostname: serverc.lab.example.com
---
service_type: host
addr: 172.25.250.13
hostname: serverd.lab.example.com
---
service_type: host
addr: 172.25.250.14
hostname: servere.lab.example.com
---
service_type: mon
placement:
  hosts:
    - clienta.lab.example.com
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
---
service_type: rgw
service_id: realm.zone
placement:
  hosts:
    - serverc.lab.example.com
    - serverd.lab.example.com
---
service_type: mgr
placement:
  hosts:
    - clienta.lab.example.com
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: 'server*'
data_devices:
  paths:
    - /dev/vdb
    - /dev/vdc
    - /dev/vdd
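Optionally, a quick sanity check before bootstrapping is to list the service_type and service_id lines, which confirms that the four host entries plus the mon, rgw, mgr, and osd services are all present in the file.
[root@serverc ceph]# grep -E '^service_(type|id):' initial-config-primary-cluster.yaml
service_type: host
service_type: host
service_type: host
service_type: host
service_type: mon
service_type: rgw
service_id: realm.zone
service_type: mgr
service_type: osd
service_id: default_drive_group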
As the root user on the serverc host, bootstrap the Ceph cluster using the created service specification file.
Set the Ceph dashboard password to redhat and use the --dashboard-password-noupdate option.
Use the --allow-fqdn-hostname option to use fully qualified domain names for the hosts.
The registry URL is registry.lab.example.com, the username is registry, and the password is redhat.
As the root user on the serverc host, run the cephadm bootstrap command with the provided parameters to bootstrap the Ceph cluster.
Use the created service specification file.
[root@serverc ceph]# cephadm bootstrap --mon-ip=172.25.250.12 \
--apply-spec=initial-config-primary-cluster.yaml \
--initial-dashboard-password=redhat \
--dashboard-password-noupdate \
--allow-fqdn-hostname \
--registry-url=registry.lab.example.com \
--registry-username=registry \
--registry-password=redhat
...output omitted...
Ceph Dashboard is now available at:
URL: https://serverc.lab.example.com:8443/
User: admin
Password: redhat
Applying initial-config-primary-cluster.yaml to cluster
Adding ssh key to clienta.lab.example.com
Adding ssh key to serverd.lab.example.com
Adding ssh key to servere.lab.example.com
Added host 'clienta.lab.example.com' with addr '172.25.250.10'
Added host 'serverc.lab.example.com' with addr '172.25.250.12'
Added host 'serverd.lab.example.com' with addr '172.25.250.13'
Added host 'servere.lab.example.com' with addr '172.25.250.14'
Scheduled mon update...
Scheduled rgw.realm.zone update...
Scheduled mgr update...
Scheduled osd.default_drive_group update...
You can access the Ceph CLI with:
sudo /sbin/cephadm shell --fsid cd6a42ce-36f6-11ec-8c67-52540000fa0c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.
As the root user on the serverc host, run the cephadm shell.
[root@serverc ceph]# cephadm shell
Verify that the cluster status is HEALTH_OK.
Wait until the cluster reaches the HEALTH_OK status.
[ceph: root@serverc /]# ceph status
  cluster:
    id:     cd6a42ce-36f6-11ec-8c67-52540000fa0c
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum serverc.lab.example.com (age 2m)
    mgr: serverc.lab.example.com.anabtp(active, since 91s), standbys: clienta.trffqp
    osd: 9 osds: 9 up (since 21s), 9 in (since 46s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   47 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean
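You can also list the orchestrator services to confirm that the mon, mgr, rgw.realm.zone, and osd.default_drive_group services from the specification are deployed; the counts shown depend on how far the deployment has progressed.
[ceph: root@serverc /]# ceph orch ls
...output omitted...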
Label the clienta host as the admin node.
Manually copy the ceph.conf and ceph.client.admin.keyring files to the admin node.
On the admin node, test the cephadm shell.
Label the clienta host as the admin node.
[ceph: root@serverc /]# ceph orch host label add clienta.lab.example.com _admin
Added label _admin to host clienta.lab.example.com
Copy the ceph.conf and ceph.client.admin.keyring files from the serverc host to the clienta host.
Locate these files in /etc/ceph on both hosts.
[ceph: root@serverc /]# exit
exit
[root@serverc ceph]# cd /etc/ceph
[root@serverc ceph]# scp {ceph.client.admin.keyring,ceph.conf} \
root@clienta:/etc/ceph/
Warning: Permanently added 'clienta' (ECDSA) to the list of known hosts.
ceph.client.admin.keyring                    100%   63   105.6KB/s   00:00
ceph.conf                                    100%  177   528.3KB/s   00:00
On the admin node, test the cephadm shell.
[root@serverc ceph]# exit
logout
[admin@serverc ~]$ exit
Connection to serverc closed.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
Inferring fsid cd6a42ce-36f6-11ec-8c67-52540000fa0c
Inferring config /var/lib/ceph/cd6a42ce-36f6-11ec-8c67-52540000fa0c/mon.clienta/config
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:6306...47ff
[ceph: root@clienta /]# ceph health
HEALTH_OK
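From the admin node, you can also list the cluster hosts to confirm that the _admin label is applied to clienta; the exact columns shown depend on your Ceph release.
[ceph: root@clienta /]# ceph orch host ls
...output omitted...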
Manually add OSDs to the servere node using devices /dev/vde and /dev/vdf.
Set 172.25.250.0/24 for the OSD public network and 172.25.249.0/24 for the OSD cluster network.
Display the servere node storage device inventory on the Ceph cluster.
Verify that the /dev/vde and /dev/vdf devices are available.
[ceph: root@clienta /]# ceph orch device ls --hostname=servere.lab.example.com
Hostname                 Path      Type  Serial                Size   Health   Ident  Fault  Available
servere.lab.example.com  /dev/vde  hdd   4d212d34-e5a0-4347-9  10.7G  Unknown  N/A    N/A    Yes
servere.lab.example.com  /dev/vdf  hdd   d86b1a78-10b5-46af-9  10.7G  Unknown  N/A    N/A    Yes
servere.lab.example.com  /dev/vdb  hdd   1880975e-c78f-4347-8  10.7G  Unknown  N/A    N/A    No
servere.lab.example.com  /dev/vdc  hdd   2df15dd0-8eb6-4425-8  10.7G  Unknown  N/A    N/A    No
servere.lab.example.com  /dev/vdd  hdd   527656ac-8c51-47b2-9  10.7G  Unknown  N/A    N/A    No
Create the OSDs using the /dev/vde and /dev/vdf devices on the servere node.
[ceph: root@clienta /]# ceph orch daemon add osd servere.lab.example.com:/dev/vde
Created osd(s) 9 on host 'servere.lab.example.com'
[ceph: root@clienta /]# ceph orch daemon add osd servere.lab.example.com:/dev/vdf
Created osd(s) 10 on host 'servere.lab.example.com'
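Optionally, verify that the two new OSDs, osd.9 and osd.10, appear under the servere host; the exact tree layout depends on your cluster.
[ceph: root@clienta /]# ceph osd tree
...output omitted...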
For the OSD options, set public_network to 172.25.250.0/24 and cluster_network to 172.25.249.0/24.
[ceph: root@clienta /]# ceph config set osd public_network 172.25.250.0/24
[ceph: root@clienta /]# ceph config set osd cluster_network 172.25.249.0/24
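You can confirm both values with ceph config get. The specifications also call for a 1024 MiB OSD journal size; the step above covers only the networks, so the osd_journal_size setting shown last is one assumed way to meet that requirement (the option value is expressed in megabytes).
[ceph: root@clienta /]# ceph config get osd public_network
172.25.250.0/24
[ceph: root@clienta /]# ceph config get osd cluster_network
172.25.249.0/24
[ceph: root@clienta /]# ceph config set osd osd_journal_size 1024    # assumed approach for the 1024 MiB journal requirement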
Return to the workstation machine as the student user.
[ceph: root@clienta /]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the lab.