In this lab, you will deploy a Red Hat Ceph Storage cluster.
Outcomes
You should be able to deploy a new Ceph cluster.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start deploy-review
This command confirms that the local container registry for the classroom is running and deletes the prebuilt Ceph cluster so it can be redeployed with the steps in this exercise.
This lab start script immediately deletes the prebuilt Ceph cluster and takes a few minutes to complete. Wait for the command to finish before continuing.
Procedure 2.3. Instructions
Deploy a new cluster with serverc, serverd, and servere as MON, MGR, and OSD nodes.
Use serverc as the deployment bootstrap node.
Add OSDs to the cluster after the cluster deploys.
Use serverc as the bootstrap node.
Log in to serverc as the admin user and switch to the root user.
Run the cephadm-preflight.yml playbook to prepare the cluster hosts.
Log in to serverc as the admin user and switch to the root user.
[student@workstation ~]$ ssh admin@serverc
[admin@serverc ~]$ sudo -i
[root@serverc ~]#
Run the cephadm-preflight.yml playbook to prepare the cluster hosts.
[root@serverc ~]# cd /usr/share/cephadm-ansible
[root@serverc cephadm-ansible]# ansible-playbook -i /tmp/hosts \
cephadm-preflight.yml --extra-vars "ceph_origin="
...output omitted...
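The preflight playbook reads its target hosts from the /tmp/hosts inventory file, whose contents are not shown in this exercise. A minimal Ansible inventory for this classroom might simply list the three cluster hosts, one per line (file contents assumed, not taken from the lab environment):

serverc.lab.example.com
serverd.lab.example.com
servere.lab.example.com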
Create a services specification file called initial-cluster-config.yaml.
Using the following template, add hosts serverd.lab.example.com and servere.lab.example.com, with their IP addresses, as service_type: host.
Add serverd and servere to the mon and mgr sections, so that all three hosts are listed.
---
service_type: host
addr: 172.25.250.12
hostname: serverc.lab.example.com
---
service_type: mon
placement:
  hosts:
    - serverc.lab.example.com
---
service_type: mgr
placement:
  hosts:
    - serverc.lab.example.com
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: 'server*'
data_devices:
  paths:
    - /dev/vdb
    - /dev/vdc
    - /dev/vdd
Create a services specification file called /tmp/initial-cluster-config.yaml.
[root@serverc cephadm-ansible]# cat /tmp/initial-cluster-config.yaml
---
service_type: host
addr: 172.25.250.12
hostname: serverc.lab.example.com
---
service_type: host
addr: 172.25.250.13
hostname: serverd.lab.example.com
---
service_type: host
addr: 172.25.250.14
hostname: servere.lab.example.com
---
service_type: mon
placement:
  hosts:
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
---
service_type: mgr
placement:
  hosts:
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: 'server*'
data_devices:
  paths:
    - /dev/vdb
    - /dev/vdc
    - /dev/vdd
Create the Ceph cluster by using the initial-cluster-config.yaml service specification file.
Verify that the cluster was successfully deployed.
Run the cephadm bootstrap command to create the Ceph cluster.
Use the initial-cluster-config.yaml service specification file that you just created.
[root@serverc cephadm-ansible]# cephadm bootstrap --mon-ip=172.25.250.12 \
--apply-spec=/tmp/initial-cluster-config.yaml \
--initial-dashboard-password=redhat \
--dashboard-password-noupdate \
--allow-fqdn-hostname \
--registry-url=registry.redhat.io \
--registry-username=registry \
--registry-password=redhat
...output omitted...
Ceph Dashboard is now available at:
URL: https://serverc.lab.example.com:8443/
User: admin
Password: redhat
Applying /tmp/initial-cluster-config.yaml to cluster
Adding ssh key to serverd.lab.example.com
Adding ssh key to servere.lab.example.com
Added host 'serverc.lab.example.com' with addr '172.25.250.12'
Added host 'serverd.lab.example.com' with addr '172.25.250.13'
Added host 'servere.lab.example.com' with addr '172.25.250.14'
Scheduled mon update...
Scheduled mgr update...
Scheduled osd.default_drive_group update...
You can access the Ceph CLI with:
sudo /sbin/cephadm shell --fsid 0bbab748-30ee-11ec-abc4-52540000fa0c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
...output omitted...
Bootstrap complete.
Using the cephadm shell, verify that the cluster was successfully deployed.
Wait for the cluster to finish deploying and reach the HEALTH_OK status.
[root@serverc cephadm-ansible]# cephadm shell
...output omitted...
[ceph: root@serverc /]# ceph status
  cluster:
    id:     0bbab748-30ee-11ec-abc4-52540000fa0c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum serverc.lab.example.com,servere,serverd (age 2m)
    mgr: serverc.lab.example.com.blxerd (active, since 3m), standbys: serverd.nibyts, servere.rkpsii
    osd: 9 osds: 9 up (since 2m), 9 in (since 2m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   46 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean
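Optionally, before expanding the cluster, you can confirm that all three hosts joined the cluster and list the storage devices that the orchestrator detected. These are standard cephadm orchestrator queries, not graded lab steps, and their output is omitted here:

[ceph: root@serverc /]# ceph orch host ls
...output omitted...
[ceph: root@serverc /]# ceph orch device ls
...output omitted...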
Expand the cluster by adding OSDs to serverc, serverd, and servere.
Use the following service specification file.
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
data_devices:
  paths:
    - /dev/vde
    - /dev/vdf
Create a service specification file called /tmp/osd-spec.yaml.
[ceph: root@serverc /]# cat /tmp/osd-spec.yaml
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - serverc.lab.example.com
    - serverd.lab.example.com
    - servere.lab.example.com
data_devices:
  paths:
    - /dev/vde
    - /dev/vdf
Use the ceph orch apply command to add the OSDs to the cluster OSD nodes.
[ceph: root@serverc /]# ceph orch apply -i /tmp/osd-spec.yaml
Scheduled osd.default_drive_group update...
Verify that the OSDs were added. Wait for the new OSDs to display as up and in.
[ceph: root@serverc /]# ceph status
  cluster:
    id:     0bbab748-30ee-11ec-abc4-52540000fa0c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum serverc.lab.example.com,servere,serverd (age 5m)
    mgr: serverc.lab.example.com.blxerd (active, since 6m), standbys: serverd.nibyts, servere.rkpsii
    osd: 15 osds: 15 up (since 10s), 15 in (since 27s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   83 MiB used, 150 GiB / 150 GiB avail
    pgs:     1 active+clean
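If you want to see how the new OSDs are distributed across the hosts, the ceph osd tree command lists each OSD under its host in the CRUSH hierarchy. This check is optional; the output depends on your cluster and is omitted here:

[ceph: root@serverc /]# ceph osd tree
...output omitted...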
Return to workstation as the student user.
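One way to return to workstation, assuming that you are still in the cephadm shell on serverc from the previous steps, is to exit each shell in turn:

[ceph: root@serverc /]# exit
[root@serverc cephadm-ansible]# exit
[admin@serverc ~]$ exit
[student@workstation ~]$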
This concludes the lab.
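If your classroom provides grading and cleanup for this exercise, they are normally run from workstation as the student user. The exact verbs depend on the course's lab command, but a typical pattern looks like the following (commands assumed, verify against your course material):

[student@workstation ~]$ lab grade deploy-review
[student@workstation ~]$ lab finish deploy-review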