In this lab, you query and modify Red Hat Ceph Storage configuration settings.
Outcomes
You should be able to configure cluster settings.
As the student user on the workstation machine, use the lab command to prepare your system for this lab.
[student@workstation ~]$ lab start configure-review
This command confirms that the required hosts for this exercise are accessible.
Procedure 3.4. Instructions
Configure Ceph cluster settings by using both the command line and the Ceph Dashboard GUI. View MON settings, and configure firewall rules for the MON and RGW nodes.
Configure your Red Hat Ceph Storage cluster settings.
Set mon_data_avail_warn to 15 and mon_max_pg_per_osd to 400.
These changes must persist across cluster restarts.
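The persistence requirement matters for the commands you choose: ceph config set stores the value in the MONs' central configuration database, so it survives daemon and cluster restarts, whereas runtime injection changes only the running daemons and is lost on restart. A minimal sketch of the contrast, using this lab's target value (the injectargs form is shown for comparison only and is not part of the graded steps):
[ceph: root@clienta /]# ceph config set mon mon_data_avail_warn 15    # persists in the config database
[ceph: root@clienta /]# ceph tell mon.* injectargs '--mon_data_avail_warn=15'    # runtime only, lost on restart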
Log in to clienta as the admin user and use sudo to run the cephadm shell.
Configure mon_data_avail_warn to 15 and mon_max_pg_per_osd to 400.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]# ceph config set mon mon_data_avail_warn 15
[ceph: root@clienta /]# ceph config set mon mon_max_pg_per_osd 400
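Note the configuration target in these commands: setting mon applies the value to all MON daemons in the cluster, while a daemon-specific target scopes it to a single instance. A narrower form is sketched below for illustration only; it is not required by this lab (mon.serverc names one MON in this cluster):
[ceph: root@clienta /]# ceph config set mon.serverc mon_data_avail_warn 15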
Verify the new values for each setting.
[ceph: root@clienta /]# ceph config get mon.serverc mon_data_avail_warn
15
[ceph: root@clienta /]# ceph config get mon.serverc mon_max_pg_per_osd
400
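As an alternative verification, ceph config dump lists every override stored in the central configuration database; filtering the output is a quick sanity check (a sketch; both options should appear with the values that you set):
[ceph: root@clienta /]# ceph config dump | grep -e mon_data_avail_warn -e mon_max_pg_per_osd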
Configure the mon_data_avail_crit setting to 10 by using the Ceph Dashboard GUI.
Open a web browser and go to https://serverc:8443.
If necessary, accept the certificate warning.
If the URL redirects to the active MGR node, you might need to accept the certificate warning again.
Log in as the admin user, with redhat as the password.
In the Ceph Dashboard web UI, click Cluster → Configuration to display the Configuration page.
Select the Advanced option from the Level menu to view advanced configuration settings.
Type mon_data_avail_crit in the search bar.
Click mon_data_avail_crit and then click Edit.
Set the global value to 10 and then click Update.
Verify that a message indicates that the configuration option is updated.
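Because the dashboard writes to the same central configuration database as ceph config set, you can also cross-check the change from the cephadm shell; a quick verification sketch, assuming the dashboard update succeeded:
[ceph: root@clienta /]# ceph config get mon mon_data_avail_crit
10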
Display the MON map and view the cluster quorum status.
Display the MON map.
[ceph: root@clienta /]# ceph mon dump
epoch 4
fsid 472b24e2-1821-11ec-87d7-52540000fa0c
last_changed 2021-09-20T01:41:44.138014+0000
created 2021-09-18T01:39:57.614592+0000
min_mon_release 16 (pacific)
election_strategy: 1
0: [v2:172.25.250.12:3300/0,v1:172.25.250.12:6789/0] mon.serverc.lab.example.com
1: [v2:172.25.250.13:3300/0,v1:172.25.250.13:6789/0] mon.serverd
2: [v2:172.25.250.14:3300/0,v1:172.25.250.14:6789/0] mon.servere
3: [v2:172.25.250.10:3300/0,v1:172.25.250.10:6789/0] mon.clienta
dumped monmap epoch 4
Display the cluster quorum status.
[ceph: root@clienta /]# ceph mon stat
e4: 4 mons at {clienta=[v2:172.25.250.10:3300/0,v1:172.25.250.10:6789/0], serverc.lab.example.com=[v2:172.25.250.12:3300/0,v1:172.25.250.12:6789/0], serverd=[v2:172.25.250.13:3300/0,v1:172.25.250.13:6789/0], servere=[v2:172.25.250.14:3300/0,v1:172.25.250.14:6789/0]}, election epoch 66, leader 0 serverc.lab.example.com, quorum 0,1,2,3 serverc.lab.example.com,serverd,servere,clienta
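If you need the quorum details in a machine-readable form, ceph quorum_status prints the same information as JSON; inspect the quorum_names and quorum_leader_name fields (output omitted here):
[ceph: root@clienta /]# ceph quorum_status -f json-pretty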
Configure firewall rules for the MON and RGW nodes on serverd.
Exit the cephadm shell.
Log in to serverd as the admin user and switch to the root user.
Configure a firewall rule for the MON node on serverd.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ ssh serverd
admin@serverd's password: redhat
[admin@serverd ~]$ sudo -i
[root@serverd ~]# firewall-cmd --zone=public --add-service=ceph-mon
success
[root@serverd ~]# firewall-cmd --zone=public --add-service=ceph-mon --permanent
success
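Running the command twice, with and without --permanent, updates both the runtime and the stored firewall configuration. An equivalent pattern, sketched here only for reference, is to add the rule permanently and then reload so that the permanent configuration becomes the runtime one:
[root@serverd ~]# firewall-cmd --permanent --zone=public --add-service=ceph-mon
[root@serverd ~]# firewall-cmd --reload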
Configure a firewall rule for the RGW node on serverd.
[root@serverd ~]# firewall-cmd --zone=public --add-port=7480/tcp
success
[root@serverd ~]# firewall-cmd --zone=public --add-port=7480/tcp --permanent
success
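To confirm that both rules are active, list the zone configuration in the runtime and permanent views; ceph-mon should appear under services (it opens the MON ports, 3300 and 6789), and 7480/tcp, the default RGW frontend port, should appear under ports (a verification sketch; output not shown):
[root@serverd ~]# firewall-cmd --zone=public --list-all
[root@serverd ~]# firewall-cmd --permanent --zone=public --list-all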
Return to workstation as the student user.
This concludes the lab.