In this exercise, you configure shared file access for clients with CephFS.
Outcomes
You should be able to deploy a Metadata Server (MDS) and mount a CephFS file system with both the kernel client and the Ceph-Fuse client. You should also be able to configure the file system mount to persist across reboots.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start fileshare-deploy
Procedure 10.1. Instructions
The serverc, serverd, and servere nodes are an operational 3-node Ceph cluster.
All three nodes operate as a MON, a MGR, and an OSD host with at least one colocated OSD.
The clienta node is your admin node, and you will use it to deploy the MDS on serverc.
Log in to clienta as the admin user.
Deploy the serverc node as an MDS.
Verify that the MDS is operating and that the mycephfs_data and mycephfs_metadata pools for CephFS are created.
Log in to clienta as the admin user, and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
Create the two required CephFS pools.
Name these pools mycephfs_data and mycephfs_metadata.
[ceph: root@clienta /]# ceph osd pool create mycephfs_data
pool 'mycephfs_data' created
[ceph: root@clienta /]# ceph osd pool create mycephfs_metadata
pool 'mycephfs_metadata' created
Create the CephFS file system with the name mycephfs.
Your pool numbers might differ in your lab environment.
[ceph: root@clienta /]# ceph fs new mycephfs mycephfs_metadata mycephfs_data
new fs with metadata pool 7 and data pool 6
Deploy the MDS service on serverc.lab.example.com.
[ceph: root@clienta /]# ceph orch apply mds mycephfs \
--placement="1 serverc.lab.example.com"
Scheduled mds.mycephfs update...
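Optionally, you can also ask the orchestrator where the MDS daemon was scheduled before checking the MDS state. These are standard ceph orch queries; the exact output columns vary by release, but both should list an mds.mycephfs daemon placed on serverc.

[ceph: root@clienta /]# ceph orch ls mds
[ceph: root@clienta /]# ceph orch ps --daemon_type mds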
Verify that the MDS service is active.
[ceph: root@clienta /]# ceph mds stat
mycephfs:1 {0=mycephfs.serverc.mycctv=up:active}
[ceph: root@clienta /]# ceph status
  cluster:
    id:     472b24e2-1821-11ec-87d7-52540000fa0c
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum serverc.lab.example.com,serverd,servere,clienta (age 29m)
    mgr: servere.xnprpz(active, since 30m), standbys: clienta.jahhir, serverd.qbvejy, serverc.lab.example.com.xgbgpo
    mds: 1/1 daemons up
    osd: 15 osds: 15 up (since 28m), 15 in (since 28m)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   7 pools, 169 pgs
    objects: 212 objects, 7.5 KiB
    usage:   215 MiB used, 150 GiB / 150 GiB avail
    pgs:     169 active+clean
List the available pools.
Verify that mycephfs_data and mycephfs_metadata are listed.
[ceph: root@clienta /]# ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    90 GiB  90 GiB  129 MiB   129 MiB       0.14
TOTAL  90 GiB  90 GiB  129 MiB   129 MiB       0.14

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0     28 GiB
.rgw.root               2   32  1.3 KiB        4   48 KiB      0     28 GiB
default.rgw.log         3   32  3.6 KiB      177  408 KiB      0     28 GiB
default.rgw.control     4   32      0 B        8      0 B      0     28 GiB
default.rgw.meta        5    8      0 B        0      0 B      0     28 GiB
mycephfs_data           6   32      0 B        0      0 B      0     28 GiB
mycephfs_metadata       7   32  2.3 KiB       22   96 KiB      0     28 GiB
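The ceph fs new command also tags both pools with the cephfs application. If you want to confirm this, an optional check is to list the pool details; the entries for mycephfs_data and mycephfs_metadata should include the cephfs application tag.

[ceph: root@clienta /]# ceph osd pool ls detail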
Mount the new CephFS file system on the /mnt/mycephfs directory as a kernel client on the clienta node.
Verify normal operation by creating two folders dir1 and dir2 on the file system.
Create an empty file called atestfile in the dir1 directory and a 10 MB file called ddtest in the same directory.
Exit the cephadm shell.
Switch to the root user.
Verify that the Ceph client key ring is present in the /etc/ceph folder on the client node.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo -i
[root@clienta ~]# ls -l /etc/ceph
total 12
-rw-r--r--. 1 root root  63 Sep 17 21:42 ceph.client.admin.keyring
-rw-r--r--. 1 root root 177 Sep 17 21:42 ceph.conf
-rw-------. 1 root root  82 Sep 17 21:42 podman-auth.json
Install the ceph-common package on the client node.
[root@clienta ~]# yum install ceph-common
...output omitted...
Create a mount point called /mnt/mycephfs and mount the new CephFS file system.
[root@clienta ~]# mkdir /mnt/mycephfs
[root@clienta ~]# mount.ceph serverc.lab.example.com:/ /mnt/mycephfs \
-o name=admin
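In this lab, mount.ceph finds the admin keyring and ceph.conf under /etc/ceph automatically. On a client that does not have the keyring there, you could instead pass the secret explicitly with the secretfile option; the path below is only an illustration of a file that contains nothing but the client key.

[root@clienta ~]# mount.ceph serverc.lab.example.com:/ /mnt/mycephfs \
-o name=admin,secretfile=/root/admin.secret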
Verify that the mount is successful.
[root@clienta ~]# df /mnt/mycephfs
Filesystem 1K-blocks Used Available Use% Mounted on
172.25.250.12:/   29822976     0  29822976   0% /mnt/mycephfs
Create two directories called dir1 and dir2, directly underneath the mount point. Ensure that they are available.
[root@clienta ~]# mkdir /mnt/mycephfs/dir1
[root@clienta ~]# mkdir /mnt/mycephfs/dir2
[root@clienta ~]# ls -al /mnt/mycephfs/
total 0
drwxr-xr-x. 4 root root  2 Sep 28 06:04 .
drwxr-xr-x. 3 root root 22 Sep 28 05:49 ..
drwxr-xr-x. 2 root root  0 Sep 28 06:04 dir1
drwxr-xr-x. 2 root root  0 Sep 28 06:04 dir2
Create an empty file called atestfile in the dir1 directory. Then, create a 10 MB file called ddtest in the same directory.
[root@clienta ~]# touch /mnt/mycephfs/dir1/atestfile
[root@clienta ~]# dd if=/dev/zero of=/mnt/mycephfs/dir1/ddtest \
bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB, 9.8 MiB) copied, 0.0208212 s, 492 MB/s
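If you want to double-check the new files before unmounting, a simple listing is enough; atestfile should be empty and ddtest should be about 10 MB (output not shown here).

[root@clienta ~]# ls -lh /mnt/mycephfs/dir1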
Unmount the CephFS file system.
[root@clienta ~]# umount /mnt/mycephfs
Run the ceph fs status command and inspect the size of the used data in the mycephfs_data pool.
A larger size is reported because CephFS data is replicated across the three Ceph nodes: the 10,240,000-byte ddtest file is stored three times, about 30,720,000 bytes, which matches the roughly 29.2M of used data in the pool.
[root@clienta ~]# cephadm shell -- ceph fs status
Inferring fsid 472b24e2-1821-11ec-87d7-52540000fa0c
Inferring config /var/lib/ceph/472b24e2-1821-11ec-87d7-52540000fa0c/mon.clienta/config
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:6306...a47ff
mycephfs - 0 clients
========
RANK  STATE   MDS                      ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  mycephfs.serverc.mycctv  Reqs:    0 /s    14     17     14      0
       POOL          TYPE      USED  AVAIL
mycephfs_metadata  metadata    152k  28.4G
  mycephfs_data      data     29.2M  28.4G
MDS version: ceph version 16.2.0-117.el8cp (0e34bb74700060ebfaa22d99b7d2cdc037b28a57) pacific (stable)
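To confirm the replication factor that explains this size, you can query the data pool directly. On this cluster the expected answer is size: 3, although the value depends on the pool configuration.

[root@clienta ~]# cephadm shell -- ceph osd pool get mycephfs_data size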
Create a restricteduser user, which has read access to the root folder, and read and write permissions on the dir2 folder.
Use this new user to mount the CephFS file system again on clienta and check the permissions.
Create the restricteduser user with read permission on the root folder, and read and write permissions on the dir2 folder.
Use the cephadm shell --mount option to copy the user keyring file to the /etc/ceph folder on clienta.
[root@clienta ~]# cephadm shell --mount /etc/ceph/
[ceph: root@clienta /]# ceph fs authorize mycephfs client.restricteduser \
/ r /dir2 rw
[client.restricteduser]
	key = AQBc315hI7PaBRAA9/9fdmj+wjblK+izstA0aQ==
[ceph: root@clienta /]# ceph auth get client.restricteduser \
-o /mnt/ceph.client.restricteduser.keyring
[ceph: root@clienta /]# exit
exit
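If you are curious about the exact capabilities that ceph fs authorize generated, you can display them before mounting. The output should show read-only MDS caps on / and read-write caps restricted to /dir2; the precise cap syntax varies between Ceph releases.

[root@clienta ~]# cephadm shell -- ceph auth get client.restricteduser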
Use the kernel client to mount the mycephfs file system with this user.
[root@clienta ~]# mount.ceph serverc.lab.example.com:/ /mnt/mycephfs \
-o name=restricteduser,fs=mycephfs
Test the user permissions in the different folders and files.
[root@clienta ~]# tree /mnt
/mnt
└── mycephfs
    ├── dir1
    │   ├── atestfile
    │   └── ddtest
    └── dir2
3 directories, 2 files
[root@clienta ~]# touch /mnt/mycephfs/dir1/restricteduser_file1
touch: cannot touch '/mnt/mycephfs/dir1/restricteduser_file1': Permission denied
[root@clienta ~]# touch /mnt/mycephfs/dir2/restricteduser_file2
[root@clienta ~]# ls /mnt/mycephfs/dir2
restricteduser_file2
[root@clienta ~]# rm /mnt/mycephfs/dir2/restricteduser_file2
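Because client.restricteduser has only read access on the file system root, creating a file directly under /mnt/mycephfs should be denied in the same way as in dir1. You can optionally verify that too; rootfile is just an illustrative name.

[root@clienta ~]# touch /mnt/mycephfs/rootfile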
Unmount the CephFS file system.
[root@clienta ~]# umount /mnt/mycephfs
Install the ceph-fuse package and mount the file system on a new directory called /mnt/mycephfuse.
Create a directory called /mnt/mycephfuse to use as a mount point for the Ceph-Fuse client.
[root@clienta ~]# mkdir /mnt/mycephfuse
Install the ceph-fuse package, which is not installed by default.
[root@clienta ~]# yum install ceph-fuse
...output omitted...
Use the installed Ceph-Fuse driver to mount the file system.
[root@clienta ~]# ceph-fuse -n client.restricteduser \
--client_fs mycephfs /mnt/mycephfuse
2021-09-28T06:29:06.205-0400 7fc4f1fdd200 -1 init, newargv = 0x56415adcb160 newargc=15
ceph-fuse[39038]: starting ceph client
ceph-fuse[39038]: starting fuse
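Before browsing the mount, you can optionally confirm that the FUSE mount is active; ceph-fuse mounts usually appear with a fuse.ceph-fuse file system type, although the exact wording depends on the version.

[root@clienta ~]# findmnt /mnt/mycephfuse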
Run the tree command on the /mnt directory to see its data.
[root@clienta ~]# tree /mnt
/mnt
├── mycephfs
└── mycephfuse
├── dir1
│ ├── atestfile
│ └── ddtest
└── dir2
4 directories, 2 files
Unmount the Ceph-Fuse file system.
[root@clienta ~]# umount /mnt/mycephfuse
Use the FUSE client to persistently mount the CephFS file system on the /mnt/mycephfuse folder.
Configure the /etc/fstab file to mount the file system at startup.
[root@clienta ~]# cat /etc/fstab
...output omitted...
serverc.lab.example.com:/ /mnt/mycephfuse fuse.ceph ceph.id=restricteduser,_netdev 0 0
Mount the /mnt/mycephfuse folder again with the mount -a command.
Verify with the df command.
[root@clienta ~]# mount -a
2021-09-28T06:33:28.715-0400 7fb5a5e02200 -1 init, newargv = 0x55b327cbaa80 newargc=17
ceph-fuse[39201]: starting ceph client
ceph-fuse[39201]: starting fuse
[root@clienta ~]# df
Filesystem     1K-blocks  Used Available Use% Mounted on
...output omitted...
ceph-fuse       49647616 12288  49635328   1% /mnt/mycephfuse
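For reference, a persistent mount can also be configured with the kernel client instead of FUSE. A minimal sketch of such an /etc/fstab entry, assuming the restricteduser keyring is present in /etc/ceph, might look like the following; it is not part of this exercise.

serverc.lab.example.com:/ /mnt/mycephfs ceph name=restricteduser,fs=mycephfs,_netdev 0 0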
Unmount the /mnt/mycephfuse folder.
[root@clienta ~]# umount /mnt/mycephfuse
Return to workstation as the student user.
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
Run the lab finish script on the workstation server so that the clienta node can be safely rebooted without mount conflicts.
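Assuming the finish script uses the same exercise name as the start script, the command is likely:

[student@workstation ~]$ lab finish fileshare-deploy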
This concludes the guided exercise.