In this exercise, you navigate and query the Red Hat Ceph Storage cluster.
Outcomes
You should be able to navigate and work with services within the Ceph cluster.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start intro-arch
This command confirms that the Ceph cluster in the classroom is operating.
Procedure 1.1. Instructions
Log in to the admin node, clienta, and view Ceph services.
Log in to clienta as the admin user and switch to the root user.
[student@workstation ~]$ ssh admin@clienta
...output omitted...
[admin@clienta ~]$ sudo -i
[root@clienta ~]#
Use the ceph orch ls command within the cephadm shell to view the running services.
[root@clienta ~]# cephadm shell -- ceph orch ls
Inferring fsid 2ae6d05a-229a-11ec-925e-52540000fa0c
Inferring config /var/lib/ceph/2ae6d05a-229a-11ec-925e-52540000fa0c/mon.clienta/config
Using recent ceph image...
NAME RUNNING REFRESHED AGE PLACEMENT
alertmanager 1/1 8m ago 2d count:1
crash 4/4 8m ago 2d *
grafana 1/1 8m ago 2d count:1
mgr 4/4 8m ago 2d clienta.lab.example.com;serverc.lab.example.com;serverd.lab.example.com;servere.lab.example.com
mon 4/4 8m ago 2d clienta.lab.example.com;serverc.lab.example.com;serverd.lab.example.com;servere.lab.example.com
node-exporter 4/4 8m ago 2d *
osd.default_drive_group 9/12 8m ago 2d server*
prometheus 1/1 8m ago 2d count:1
rgw.realm.zone 2/2 8m ago 2d serverc.lab.example.com;serverd.lab.example.com
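You can optionally narrow the listing to a single service by passing a service type as an argument; for example, the following form should list only the MON service.
[root@clienta ~]# cephadm shell -- ceph orch ls mon
...output omitted...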
Use the cephadm shell command to launch the shell, and then use the ceph orch ps command to view the status of all cluster daemons.
[root@clienta ~]# cephadm shell
...output omitted...
[ceph: root@clienta /]# ceph orch ps
NAME                                HOST                     STATUS         REFRESHED  AGE  PORTS          VERSION           IMAGE ID      CONTAINER ID
alertmanager.serverc                serverc.lab.example.com  running (43m)  2m ago     53m  *:9093 *:9094  0.20.0            4c997545e699  95767d5df632
crash.serverc                       serverc.lab.example.com  running (43m)  2m ago     53m  -              16.2.0-117.el8cp  2142b60d7974  0f19ee9f42fa
crash.serverc                       serverc.lab.example.com  running (52m)  7m ago     52m  -              16.2.0-117.el8cp  2142b60d7974  036bafc0145c
crash.serverd                       serverd.lab.example.com  running (52m)  7m ago     52m  -              16.2.0-117.el8cp  2142b60d7974  2e112369ca35
crash.servere                       servere.lab.example.com  running (51m)  2m ago     51m  -              16.2.0-117.el8cp  2142b60d7974  3a2b9161c49e
grafana.serverc                     serverc.lab.example.com  running (43m)  2m ago     53m  *:3000         6.7.4             09cf77100f6a  ff674835c5fc
mgr.serverc.lab.example.com.ccjsrd  serverc.lab.example.com  running (43m)  2m ago     54m  *:9283         16.2.0-117.el8cp  2142b60d7974  449c4ba94638
mgr.serverc.lvsxza                  serverc.lab.example.com  running (51m)  7m ago     51m  *:8443 *:9283  16.2.0-117.el8cp  2142b60d7974  855376edd5f8
mon.serverc.lab.example.com         serverc.lab.example.com  running (43m)  2m ago     55m  -              16.2.0-117.el8cp  2142b60d7974  3e1763669c29
mon.serverc                         serverc.lab.example.com  running (51m)  7m ago     51m  -              16.2.0-117.el8cp  2142b60d7974  d56f57a637a8
...output omitted...
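You can optionally filter the daemon listing in the same way; depending on your Ceph version, the --daemon-type option should restrict the output to a single daemon type, such as the MON daemons.
[ceph: root@clienta /]# ceph orch ps --daemon-type mon
...output omitted...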
View the cluster health.
Use the ceph health command to view the health of your Ceph cluster.
[ceph: root@clienta /]# ceph health
HEALTH_OK
If the reported cluster status is not HEALTH_OK, the ceph health detail command shows further information about the cause of the health alert.
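You can run the command now if you want; on a healthy cluster it reports the same HEALTH_OK status without additional alerts.
[ceph: root@clienta /]# ceph health detail
...output omitted...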
Use the ceph status command to view the full cluster status.
[ceph: root@clienta /]# ceph status
cluster:
id: 2ae6d05a-229a-11ec-925e-52540000fa0c
health: HEALTH_OK
services:
mon: 4 daemons, quorum serverc.lab.example.com,clienta,serverd,servere (age 10m)
mgr: serverc.lab.example.com.aiqepd(active, since 19m), standbys: clienta.nncugs, serverd.klrkci, servere.kjwyko
osd: 9 osds: 9 up (since 19m), 9 in (since 2d)
rgw: 2 daemons active (2 hosts, 1 zones)
data:
pools: 5 pools, 105 pgs
objects: 221 objects, 4.9 KiB
usage: 156 MiB used, 90 GiB / 90 GiB avail
pgs: 105 active+clean
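To follow the cluster status interactively, you can optionally use the ceph -w command, which prints the status and then streams cluster log messages until you press Ctrl+C.
[ceph: root@clienta /]# ceph -w
...output omitted...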
Explore the Ceph cluster by viewing the cluster components.
Use the ceph mon dump command to view the cluster MON map.
[ceph: root@clienta /]# ceph mon dump
epoch 4
fsid 2ae6d05a-229a-11ec-925e-52540000fa0c
last_changed 2021-10-01T09:33:53.880442+0000
created 2021-10-01T09:30:30.146231+0000
min_mon_release 16 (pacific)
election_strategy: 1
0: [v2:172.25.250.12:3300/0,v1:172.25.250.12:6789/0] mon.serverc.lab.example.com
1: [v2:172.25.250.10:3300/0,v1:172.25.250.10:6789/0] mon.clienta
2: [v2:172.25.250.13:3300/0,v1:172.25.250.13:6789/0] mon.serverd
3: [v2:172.25.250.14:3300/0,v1:172.25.250.14:6789/0] mon.servere
dumped monmap epoch 4
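If you want a quicker check, the ceph mon stat command prints a one-line summary of the same monitor information, including the map epoch and the monitors in quorum.
[ceph: root@clienta /]# ceph mon stat
...output omitted...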
Use the ceph mgr stat command to view the cluster MGR status.
[ceph: root@clienta /]# ceph mgr stat
{
"epoch": 32,
"available": true,
"active_name": "serverc.lab.example.com.aiqepd",
"num_standby": 3
}
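You can also list the Manager modules, such as the dashboard, that are enabled on the cluster.
[ceph: root@clienta /]# ceph mgr module ls
...output omitted...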
Use the ceph osd pool ls command to view the cluster pools.
[ceph: root@clienta /]# ceph osd pool ls
device_health_metrics
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
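Appending detail to the command shows the configuration of each pool, such as its replica count and number of placement groups.
[ceph: root@clienta /]# ceph osd pool ls detail
...output omitted...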
Use the ceph pg stat command to view Placement Group (PG) status.
[ceph: root@clienta /]# ceph pg stat
105 pgs: 105 active+clean; 4.9 KiB data, 162 MiB used, 90 GiB / 90 GiB avail
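If you want the state and acting OSD set of each individual placement group rather than the summary, you can optionally use the ceph pg dump pgs_brief command.
[ceph: root@clienta /]# ceph pg dump pgs_brief
...output omitted...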
Use the ceph osd status command to view the status of all OSDs.
[ceph: root@clienta /]# ceph osd status
ID HOST USED AVAIL WR OPS WR DATA RD OPS RD DATA STATE
0 serverc.lab.example.com 12.5M 9.98G 0 0 0 0 exists,up
1 serverc.lab.example.com 13.8M 9.98G 0 0 0 0 exists,up
2 serverc.lab.example.com 19.0M 9.97G 0 0 0 0 exists,up
3 serverd.lab.example.com 16.8M 9.97G 0 0 0 0 exists,up
4 servere.lab.example.com 23.8M 9.97G 0 0 0 0 exists,up
5 serverd.lab.example.com 24.0M 9.97G 0 0 0 0 exists,up
6 servere.lab.example.com 12.1M 9.98G 0 0 1 0 exists,up
7 serverd.lab.example.com 15.6M 9.98G 0 0 0 0 exists,up
8 servere.lab.example.com 23.7M 9.97G 0 0 0 0 exists,up
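The related ceph osd df tree command shows per-OSD utilization grouped by the CRUSH hierarchy, which helps to spot unevenly filled OSDs.
[ceph: root@clienta /]# ceph osd df tree
...output omitted...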
Use the ceph osd crush tree command to view the cluster CRUSH hierarchy.
[ceph: root@clienta /]# ceph osd crush tree
ID CLASS WEIGHT TYPE NAME
-1 0.08817 root default
-3 0.02939 host serverc
0 hdd 0.00980 osd.0
1 hdd 0.00980 osd.1
2 hdd 0.00980 osd.2
-5 0.02939 host serverd
3 hdd 0.00980 osd.3
5 hdd 0.00980 osd.5
7 hdd 0.00980 osd.7
-7 0.02939 host servere
4 hdd 0.00980 osd.4
6 hdd 0.00980 osd.6
8 hdd 0.00980 osd.8
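You can also list the CRUSH rules that place data across this hierarchy.
[ceph: root@clienta /]# ceph osd crush rule ls
...output omitted...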
Return to workstation as the student user.
[ceph: root@clienta /]# exit
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the guided exercise.