In this exercise, you will install a Red Hat Ceph Storage cluster.
Outcomes
You should be able to install a containerized Ceph cluster by using a service specification file.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start deploy-deploy
This command confirms that the local container registry for the classroom is running and deletes the prebuilt Ceph cluster so it can be redeployed with the steps in this exercise.
This lab start script immediately deletes the prebuilt Ceph cluster and takes a few minutes to complete. Wait for the command to finish before continuing.
Procedure 2.1. Instructions
Log in to serverc as the admin user and switch to the root user.
[student@workstation ~]$ ssh admin@serverc
[admin@serverc ~]$ sudo -i
[root@serverc ~]#
Install the cephadm-ansible package, create the inventory file, and run the cephadm-preflight.yml playbook to prepare cluster hosts.
Install the cephadm-ansible package on serverc.
[root@serverc ~]# yum install cephadm-ansible
...output omitted...
Complete!
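Optionally, before continuing, you can confirm that the package installed correctly and that the preflight playbook it provides is in place. This quick check is not part of the exercise:

# Optional verification of the cephadm-ansible installation.
rpm -q cephadm-ansible
ls /usr/share/cephadm-ansible/cephadm-preflight.yml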
Create the hosts inventory file in the /usr/share/cephadm-ansible directory.
[root@serverc ~]# cd /usr/share/cephadm-ansible
[root@serverc cephadm-ansible]# cat hosts
clienta.lab.example.com
serverc.lab.example.com
serverd.lab.example.com
servere.lab.example.com
Run the cephadm-preflight.yml playbook.
[root@serverc cephadm-ansible]# ansible-playbook -i hosts \
cephadm-preflight.yml --extra-vars "ceph_origin="
...output omitted...
The ceph_origin variable is set to an empty string, which causes some playbook tasks to be skipped because, in this classroom, the Ceph packages are installed from a local classroom repository.
For a production environment, set ceph_origin to rhcs to enable the Red Hat Ceph Storage Tools repository for a supported deployment.
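For reference only, and not to be run in this classroom, a production invocation of the preflight playbook would pass ceph_origin=rhcs so that the playbook enables the repository on the managed nodes:

# Example for a production deployment (do not run in this exercise):
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"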
Review the initial-config-primary-cluster.yaml file in the /root/ceph/ directory.
---
service_type: host
addr: 172.25.250.10
hostname: clienta.lab.example.com
---
service_type: host
addr: 172.25.250.12
hostname: serverc.lab.example.com
---
service_type: host
addr: 172.25.250.13
hostname: serverd.lab.example.com
---
service_type: host
addr: 172.25.250.14
hostname: servere.lab.example.com
---
service_type: mon
placement:
  hosts:
    - clienta.lab.example.com
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
---
service_type: rgw
service_id: realm.zone
placement:
  hosts:
    - serverc.lab.example.com
    - serverd.lab.example.com
---
service_type: mgr
placement:
  hosts:
    - clienta.lab.example.com
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: 'server*'
data_devices:
  paths:
    - /dev/vdb
    - /dev/vdc
    - /dev/vdd
The service specification file uses the following settings:

service_type
  The type of service to deploy, such as host, mon, mgr, rgw, or osd.
service_id
  The identifier for a service instance, such as realm.zone for the RADOS Gateway service or default_drive_group for the OSD service.
addr and hostname
  The IP address and fully qualified domain name of a host entry.
placement
  Defines where and how to deploy the daemons. The Ceph Orchestrator deploys one monitor daemon by default; in this file, the placement setting for the mon service deploys a monitor daemon on each of the four listed hosts.
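Before bootstrapping, you can optionally confirm that the specification file parses as valid YAML. The following one-liner is a sketch that assumes the PyYAML module is available for python3 (it is pulled in with Ansible on this system):

# Optional sanity check: parse every YAML document in the specification file.
python3 -c 'import sys, yaml; list(yaml.safe_load_all(open(sys.argv[1]))); print("valid YAML")' \
  /root/ceph/initial-config-primary-cluster.yaml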
As the root user on the serverc node, run the cephadm bootstrap command to create the Ceph cluster.
Use the initial-config-primary-cluster.yaml service specification file in the /root/ceph directory.
[root@serverc ~]# cd /root/ceph
[root@serverc ceph]# cephadm bootstrap --mon-ip=172.25.250.12 \
--apply-spec=initial-config-primary-cluster.yaml \
--initial-dashboard-password=redhat \
--dashboard-password-noupdate \
--allow-fqdn-hostname \
--registry-url=registry.lab.example.com \
--registry-username=registry \
--registry-password=redhat
...output omitted...
Ceph Dashboard is now available at:
URL: https://serverc.lab.example.com:8443/
User: admin
Password: redhat
Applying initial-config-primary-cluster.yaml to cluster
Adding ssh key to clienta.lab.example.com
Adding ssh key to serverd.lab.example.com
Adding ssh key to servere.lab.example.com
Added host 'clienta.lab.example.com' with addr '172.25.250.10'
Added host 'serverc.lab.example.com' with addr '172.25.250.12'
Added host 'serverd.lab.example.com' with addr '172.25.250.13'
Added host 'servere.lab.example.com' with addr '172.25.250.14'
Scheduled mon update...
Scheduled rgw.realm.zone update...
Scheduled mgr update...
Scheduled osd.default_drive_group update...
You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid 8896efec-21ea-11ec-b6fe-52540000fa0c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.
Verify the status of the Ceph storage cluster.
Run the cephadm shell.
[root@serverc ~]# cephadm shell
...output omitted...
[ceph: root@serverc /]#
Verify that the cluster status is HEALTH_OK.
[ceph: root@serverc /]# ceph status
  cluster:
    id:     8896efec-21ea-11ec-b6fe-52540000fa0c
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum serverc.lab.example.com,serverd,servere,clienta (age 10s)
    mgr: serverc.lab.example.com.bypxer(active, since 119s), standbys: serverd.lflgzj, clienta.hloibd, servere.jhegip
    osd: 9 osds: 9 up (since 55s), 9 in (since 75s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   47 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean
Your cluster might be in the HEALTH_WARN state for a few minutes until all services and OSDs are ready.
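If the cluster is still settling, you can optionally watch the services and OSDs come online from inside the cephadm shell before continuing. These commands are not required for the exercise:

# Optional: list the services defined in the specification and their deployment status.
ceph orch ls
# Optional: list the individual daemons and the hosts they run on.
ceph orch ps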
Label clienta as the admin node, and verify that you can execute cephadm commands from clienta.
Apply the _admin label to clienta to label it as the admin node.
[ceph: root@serverc /]# ceph orch host label add clienta.lab.example.com _admin
Added label _admin to host clienta.lab.example.com
Manually copy the ceph.conf and ceph.client.admin.keyring files from serverc to clienta.
These files are located in /etc/ceph.
[ceph: root@serverc /]# exit
exit
[root@serverc ceph]# cd /etc/ceph
[root@serverc ceph]# scp {ceph.client.admin.keyring,ceph.conf} \
root@clienta:/etc/ceph/
Warning: Permanently added 'clienta' (ECDSA) to the list of known hosts.
ceph.client.admin.keyring                          100%   63   105.6KB/s   00:00
ceph.conf                                          100%  177   528.3KB/s   00:00
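As an optional check, you can confirm from serverc that both files arrived on clienta. This sketch assumes the same passwordless root SSH access that the scp command above relied on:

# Optional: verify from serverc that the files are present on clienta.
ssh root@clienta ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring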
Return to workstation as the student user, then log in to clienta as the admin user and start the cephadm shell.
Verify that you can execute cephadm commands from clienta.
[root@serverc ceph]# exit
[admin@serverc ~]$ exit
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
Inferring fsid 8896efec-21ea-11ec-b6fe-52540000fa0c
Inferring config /var/lib/ceph/8896efec-21ea-11ec-b6fe-52540000fa0c/mon.clienta/config
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:6306...47ff
[ceph: root@clienta /]# ceph health
HEALTH_OK
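While still in the cephadm shell on clienta, you can optionally list the cluster hosts to confirm that clienta carries the _admin label you applied earlier:

# Optional: list cluster hosts and their labels; clienta should show _admin.
ceph orch host ls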
Return to workstation as the student user.
[ceph: root@clienta /]# exit
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.