In this review, you will deploy CephFS on an existing Red Hat Ceph Storage cluster according to the specified requirements.
Outcomes
You should be able to deploy a Metadata Server, provide storage with CephFS, and configure clients for its use.
If you did not reset your classroom virtual machines at the end of the last chapter, save any work you want to keep from earlier exercises on those machines and reset the classroom environment now.
Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. All remaining comprehensive reviews use the default Ceph cluster provided in the initial classroom environment.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start comprehensive-review3
This command ensures that all cluster hosts are reachable.
Specifications
Create a CephFS file system called cl260-fs.
Create an MDS service called cl260-fs with two MDS instances, one on the serverc node and another on the serverd node.
Create a data pool called cephfs.cl260-fs.data and a metadata pool called cephfs.cl260-fs.meta.
Use replicated as the type for both pools.
Mount the CephFS file system on the /mnt/cephfs directory on the clienta host, and ensure that the mount point is owned by the admin user.
Save the client.admin key to the /root/secretfile file and use that file to authenticate the mount operation.
Create the ceph01 and ceph02 directories.
Create an empty file called firstfile in the ceph01 directory.
Verify that the directories and their contents are owned by the admin user.
Modify the ceph.dir.layout.stripe_count layout attribute for the /mnt/cephfs/ceph01 directory.
Verify that new files created within the directory inherit the attribute.
Use the ceph-fuse client to mount the CephFS file system on a new directory called /mnt/cephfuse.
Configure the CephFS file system to be mounted on each system startup.
Verify that the /etc/fstab file is updated accordingly.
Create a CephFS file system called cl260-fs.
Create an MDS service called cl260-fs with an MDS instance on serverc and another on serverd.
Create a data pool called cephfs.cl260-fs.data and a metadata pool called cephfs.cl260-fs.meta.
Use replicated as the type for both pools.
Verify that the MDS service is up and running.
Log in to clienta and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
Create a data pool called cephfs.cl260-fs.data and a metadata pool called cephfs.cl260-fs.meta for the CephFS service.
[ceph: root@clienta /]# ceph osd pool create cephfs.cl260-fs.data
pool 'cephfs.cl260-fs.data' created
[ceph: root@clienta /]# ceph osd pool create cephfs.cl260-fs.meta
pool 'cephfs.cl260-fs.meta' created
[ceph: root@clienta /]# ceph osd pool ls
...output omitted...
cephfs.cl260-fs.data
cephfs.cl260-fs.meta
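If you want to confirm that both pools were created as replicated pools before creating the file system, you can inspect them with the following optional checks; the exact output depends on your cluster.
[ceph: root@clienta /]# ceph osd pool ls detail | grep cl260-fs
[ceph: root@clienta /]# ceph osd pool get cephfs.cl260-fs.data size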
Create a CephFS file system called cl260-fs.
[ceph: root@clienta /]# ceph fs new cl260-fs cephfs.cl260-fs.meta \
cephfs.cl260-fs.data
new fs with metadata pool 15 and data pool 14
[ceph: root@clienta /]# ceph fs ls
name: cl260-fs, metadata pool: cephfs.cl260-fs.meta, data pools: [cephfs.cl260-fs.data ]
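A quick way to see the new file system together with its pools, and later its MDS daemons, is the ceph fs status command. This verification is optional.
[ceph: root@clienta /]# ceph fs status cl260-fs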
Create an MDS service called cl260-fs with an MDS instance on serverc and another on serverd.
[ceph: root@clienta /]# ceph orch apply mds cl260-fs \
--placement="2 serverc.lab.example.com serverd.lab.example.com"
Scheduled mds.cl260-fs update...
[ceph: root@clienta /]# ceph orch ps --daemon-type mds
NAME                          HOST                     STATUS         REFRESHED ...
mds.cl260-fs.serverc.iuwwzt   serverc.lab.example.com  running (53s)  46s ago ...
mds.cl260-fs.serverd.lapeyj   serverd.lab.example.com  running (50s)  47s ago ...
Verify that the MDS service is up and running.
[ceph: root@clienta /]# ceph mds stat
cl260-fs:1 {0=cl260-fs.serverd.lapeyj=up:active} 1 up:standby
[ceph: root@clienta /]# ceph status
  cluster:
    id:     2ae6d05a-229a-11ec-925e-52540000fa0c
    health: HEALTH_OK

  services:
    ...output omitted...
    mds: 1/1 daemons up, 1 standby
    ...output omitted...
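You can also review the MDS service at the orchestrator level, which shows the placement specification and how many daemons are running. This check is optional.
[ceph: root@clienta /]# ceph orch ls mds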
Install the ceph-common package.
Mount the CephFS file system to the /mnt/cephfs directory on the clienta host.
Save the key associated with the client.admin user to the /root/secretfile file.
Use this file to authenticate the mount operation.
Verify that the /mnt/cephfs directory is owned by the admin user.
Exit the cephadm shell and switch to the root user.
Install the ceph-common package.
[ceph: root@clienta /]# exit
exit
[admin@clienta ~]$ sudo -i
[root@clienta ~]# yum install -y ceph-common
...output omitted...
Extract the key associated with the client.admin user, and save it in the /root/secretfile file.
[root@clienta ~]# ceph auth get-key client.admin | tee /root/secretfile
...output omitted...
Create a new directory called /mnt/cephfs to use as a mount point for the CephFS file system.
Mount your new CephFS file system on that directory.
[root@clienta ~]# mkdir /mnt/cephfs
[root@clienta ~]# mount -t ceph serverc:/ /mnt/cephfs \
-o name=admin,secretfile=/root/secretfile
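Because /root/secretfile stores the admin key in clear text, you might also want to restrict its permissions so that only root can read it. This hardening step is optional and not required by the lab.
[root@clienta ~]# chmod 600 /root/secretfile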
Verify the mount.
[root@clienta ~]# df /mnt/cephfs
Filesystem 1K-blocks Used Available Use% Mounted on
172.25.250.12:/  29790208     0  29790208   0% /mnt/cephfs
Change the ownership of the top-level directory of the mounted file system to user and group admin.
[root@clienta ~]# chown admin:admin /mnt/cephfs
As the admin user, create the ceph01 and ceph02 directories.
Create an empty file called firstfile in the ceph01 directory.
Ensure that the directories and their contents are owned by the admin user.
Exit the root user session.
Create two directories directly underneath the mount point, and name them ceph01 and ceph02.
[root@clienta ~]# exit
exit
[admin@clienta ~]$ mkdir /mnt/cephfs/ceph01
[admin@clienta ~]$ mkdir /mnt/cephfs/ceph02
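If you want to confirm that the mount point and the new directories are owned by the admin user, a quick optional check is a long listing of those paths; the exact output depends on your environment.
[admin@clienta ~]$ ls -ld /mnt/cephfs /mnt/cephfs/ceph01 /mnt/cephfs/ceph02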
Create an empty file called firstfile in the ceph01 directory.
[admin@clienta ~]$ touch /mnt/cephfs/ceph01/firstfile
Set the ceph.dir.layout.stripe_count layout attribute to 2 for files created in the /mnt/cephfs/ceph01 directory.
Create a 10 MB file called secondfile in the /mnt/cephfs/ceph01 directory with the new layout attribute.
Verify the current layout of the /mnt/cephfs/ceph01 directory.
[admin@clienta ~]$ getfattr -n ceph.dir.layout /mnt/cephfs/ceph01
/mnt/cephfs/ceph01: ceph.dir.layout: No such attribute
Set the ceph.dir.layout.stripe_count layout attribute to 2 for the /mnt/cephfs/ceph01 directory.
[admin@clienta ~]$ setfattr -n ceph.dir.layout.stripe_count -v 2 \
/mnt/cephfs/ceph01
Create a 10 MB file called secondfile in the /mnt/cephfs/ceph01 directory.
[admin@clienta ~]$ dd if=/dev/zero of=/mnt/cephfs/ceph01/secondfile \
bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB, 9.8 MiB) copied, 0.0391504 s, 262 MB/s
Verify that the secondfile file has the correct layout attribute.
[admin@clienta ~]$ getfattr -n ceph.file.layout /mnt/cephfs/ceph01/secondfile
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs/ceph01/secondfile
ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs.cl260-fs.data"
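You can also confirm that the directory itself now carries the layout attribute that was absent earlier. This check is optional.
[admin@clienta ~]$ getfattr -n ceph.dir.layout /mnt/cephfs/ceph01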
Switch to the root user.
Install and use the ceph-fuse client to mount the CephFS file system on a new directory called /mnt/cephfuse.
Switch to the root user.
Install the ceph-fuse package.
[admin@clienta ~]$ sudo -i
[root@clienta ~]# yum install -y ceph-fuse
...output omitted...
Create a directory called /mnt/cephfuse to use as a mount point for the ceph-fuse client.
Mount the CephFS file system on that directory by using the ceph-fuse client.
[root@clienta ~]# mkdir /mnt/cephfuse
[root@clienta ~]# ceph-fuse -m serverc: /mnt/cephfuse/
2021-11-01T20:18:33.004-0400 7f14907ea200 -1 init, newargv = 0x5621fed5f550 newargc=15
ceph-fuse[48516]: starting ceph client
ceph-fuse[48516]: starting fuse
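To confirm that the FUSE mount is active before browsing it, you can query the mount point. This check is optional.
[root@clienta ~]# findmnt /mnt/cephfuse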
View the contents of the /mnt directory.
[root@clienta ~]# tree /mnt
/mnt
|-- cephfs
| |-- ceph01
| | |-- firstfile
| | `-- secondfile
| `-- ceph02
`-- cephfuse
|-- ceph01
| |-- firstfile
| `-- secondfile
`-- ceph02
6 directories, 4 files
Configure the CephFS file system to be persistently mounted at startup.
Use the contents of the /root/secretfile file to configure the mount operation in the /etc/fstab file.
Verify that the configuration works as expected by using the mount -a command.
View the admin key stored in the /root/secretfile file.
[root@clienta ~]# cat /root/secretfile
AQA11VZhyq8VGRAAOus0I5xLWMSdAW/759e32A==
Configure the /etc/fstab file to mount the file system at startup.
The /etc/fstab file should look like the following output.
[root@clienta ~]# cat /etc/fstab
...output omitted...
serverc:/ /mnt/cephfs ceph rw,seclabel,relatime,name=admin,secret=AQA11VZhyq8VGRAAOus0I5xLWMSdAW/759e32A==,acl 0 0
Unmount the CephFS file system, then test the mount by using the mount -a command.
Verify the mount.
[root@clienta ~]# umount /mnt/cephfs
[root@clienta ~]# mount -a
[root@clienta ~]# df /mnt/cephfs/
Filesystem       1K-blocks  Used Available Use% Mounted on
172.25.250.12:/   29773824 12288  29761536   1% /mnt/cephfs
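As an alternative to embedding the secret directly in /etc/fstab, the mount.ceph helper also accepts a secretfile option, which keeps the key out of the fstab entry. A possible entry, assuming the same mount point and the /root/secretfile file created earlier, could look like the following line; this is a sketch, not part of the lab solution.
serverc:/ /mnt/cephfs ceph name=admin,secretfile=/root/secretfile,_netdev 0 0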
Return to workstation as the student user.
[root@clienta ~]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
This concludes the lab.