In this lab, you provide file storage by deploying a Ceph Metadata Server (MDS) and mounting the CephFS file system with the kernel client.
Outcomes
You should be able to deploy an MDS and use the kernel client to mount the CephFS file system.
The serverc, serverd, and servere nodes form an operational three-node Ceph cluster.
All three nodes operate as MON and MGR hosts, and each is an OSD host with at least one colocated OSD.
The clienta node is set up as your admin node, and you use it to deploy the MDS to serverc.
As the student user on the workstation machine, use the lab command to prepare your system for this lab.
[student@workstation ~]$ lab start fileshare-review
Procedure 10.3. Instructions
Log in to clienta as the admin user.
Create the cephfs_data and cephfs_metadata pools for CephFS.
Create the mycephfs CephFS file system.
From clienta, deploy the MDS to serverc.
Verify that the MDS is up and active.
Verify that the ceph health is OK.
Log in to clienta as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
Create the two required CephFS pools.
Name these pools cephfs_data and cephfs_metadata.
[ceph: root@clienta /]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[ceph: root@clienta /]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
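Optionally, confirm that both pools exist before creating the file system. This is a quick cross-check, not a required lab step; the pool ID numbers in your output might differ:
[ceph: root@clienta /]# ceph osd lspools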
Create the CephFS file system with the name mycephfs.
Your pool numbers might differ in your lab environment.
[ceph: root@clienta /]# ceph fs new mycephfs cephfs_metadata cephfs_data
new fs with metadata pool 7 and data pool 6
Deploy the MDS service on serverc.lab.example.com.
[ceph: root@clienta /]# ceph orch apply mds mycephfs \
--placement="1 serverc.lab.example.com"
Scheduled mds.mycephfs update...
Verify that the MDS service is active. It can take some time for the MDS service to appear.
[ceph: root@clienta /]# ceph mds stat
mycephfs:1 {0=mycephfs.serverc.mycctv=up:active}
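If ceph mds stat does not yet report an active MDS, you can also watch the daemon through the orchestrator and re-run the check until it reports as running. This is an optional troubleshooting sketch; the daemon name suffix varies per deployment:
[ceph: root@clienta /]# ceph orch ps | grep mds.mycephfs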
Verify that the cluster health is OK.
[ceph: root@clienta /]# ceph status
  cluster:
    id:     ff97a876-1fd2-11ec-8258-52540000fa0c
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum serverc.lab.example.com,servere,serverd,clienta (age 2h)
    mgr: serverc.lab.example.com.btgxor(active, since 2h), standbys: clienta.soxncl, servere.fmyxwv, serverd.ufqxxk
    mds: 1/1 daemons up
    osd: 9 osds: 9 up (since 2h), 9 in (since 36h)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   7 pools, 169 pgs
    objects: 212 objects, 7.5 KiB
    usage:   162 MiB used, 90 GiB / 90 GiB avail
    pgs:     169 active+clean

  io:
    client:   1.1 KiB/s wr, 0 op/s rd, 3 op/s wr
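As a shorter alternative to reviewing the full ceph status output, you can query only the health state. This optional check should return HEALTH_OK on a healthy cluster:
[ceph: root@clienta /]# ceph health
HEALTH_OK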
On the clienta node, create the /mnt/cephfs-review mount point and mount the CephFS file system as a kernel client.
Exit the cephadm shell.
Verify that the Ceph client admin keyring is present in the /etc/ceph directory on the client node.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo ls -l /etc/ceph
total 12
-rw-r--r--. 1 root root  63 Sep 27 16:42 ceph.client.admin.keyring
-rw-r--r--. 1 root root 177 Sep 27 16:42 ceph.conf
-rw-------. 1 root root  82 Sep 27 16:42 podman-auth.json
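Optionally, inspect the keyring to confirm which Cephx identity it provides. The mount command later in this lab authenticates with name=admin, which corresponds to the client.admin entry in this file:
[admin@clienta ~]$ sudo cat /etc/ceph/ceph.client.admin.keyring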
Install the ceph-common package on the client node.
[admin@clienta ~]$ sudo yum install -y ceph-common
...output omitted...
Complete!
Create the /mnt/cephfs-review mount point directory.
Mount the new CephFS file system as a kernel client.
[admin@clienta ~]$ sudo mkdir /mnt/cephfs-review
[admin@clienta ~]$ sudo mount.ceph serverc.lab.example.com:/ /mnt/cephfs-review \
-o name=admin
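To confirm that the kernel client mounted the file system, you can list mounts of type ceph. As an optional extra, a CephFS kernel mount can be made persistent across reboots with an /etc/fstab entry; the line shown below is an illustrative sketch only and is not required for this lab:
[admin@clienta ~]$ mount -t ceph
# Example /etc/fstab entry (illustrative only, do not add it for this lab):
# serverc.lab.example.com:/  /mnt/cephfs-review  ceph  name=admin,_netdev  0 0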
Change the ownership of the top-level directory of the mounted file system to user and group admin.
[admin@clienta ~]$ sudo chown admin:admin /mnt/cephfs-review
Create a 10 MB test file called cephfs.test1.
Verify that the data is replicated across all three OSD nodes: with three replicas, the 10 MB file consumes 30 MB in the cephfs_data pool.
Use the dd command to create one 10 MB file, and then verify that the pool usage is three times the file size.
[admin@clienta ~]$ dd if=/dev/zero of=/mnt/cephfs-review/cephfs.test1 \
bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0291862 s, 359 MB/s
[admin@clienta ~]$ sudo cephadm shell -- ceph fs status
Inferring fsid ff97a876-1fd2-11ec-8258-52540000fa0c
Inferring config /var/lib/ceph/ff97a876-1fd2-11ec-8258-52540000fa0c/mon.clienta/config
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:6306...47ff
mycephfs - 1 clients
========
RANK  STATE    MDS                      ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active   mycephfs.serverc.nsihbi  Reqs:    0 /s    11     14     12      2
      POOL         TYPE      USED  AVAIL
cephfs_metadata  metadata    120k  28.4G
  cephfs_data      data     30.0M  28.4G
MDS version: ceph version 16.2.0-117.el8cp (0e34bb74700060ebfaa22d99b7d2cdc037b28a57) pacific (stable)
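Another way to see the effect of replication is the per-pool statistics from ceph df, which report both the data stored (about 10 MiB) and the space used after replication (about 30 MiB with three replicas). This is an optional cross-check:
[admin@clienta ~]$ sudo cephadm shell -- ceph df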
Return to workstation as the student user.
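For example, exit the SSH session on clienta to return to the workstation prompt:
[admin@clienta ~]$ exit
[student@workstation ~]$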
This concludes the lab.