Lab: Deploying CephFS

In this review, you will deploy CephFS on an existing Red Hat Ceph Storage cluster according to the specified requirements.

Outcomes

You should be able to deploy a Metadata Server, provide storage with CephFS, and configure clients for its use.

If you did not reset your classroom virtual machines at the end of the last chapter, save any work you want to keep from earlier exercises on those machines and reset the classroom environment now.

Important

Reset your environment before performing this exercise. All comprehensive review labs start with a clean, initial classroom environment that includes a pre-built, fully operational Ceph cluster. All remaining comprehensive reviews use the default Ceph cluster provided in the initial classroom environment.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start comprehensive-review3

This command ensures that all cluster hosts are reachable.

Specifications

  • Create a CephFS file system cl260-fs. Create an MDS service called cl260-fs with two MDS instances, one on the serverc node and another on the serverd node. Create a data pool called cephfs.cl260-fs.data and a metadata pool called cephfs.cl260-fs.meta. Use replicated as the type for both pools.

  • Mount the CephFS file system on the /mnt/cephfs directory on the clienta host, and ensure that the mount point is owned by the admin user. Save the client.admin key to the /root/secretfile file and use the file to authenticate the mount operation.

  • Create the ceph01 and ceph02 directories. Create an empty file called firstfile in the ceph01 directory. Verify that the directories and their contents are owned by the admin user.

  • Modify the ceph.dir.layout.stripe_count layout attribute for the /mnt/cephfs/ceph01 directory. Verify that new files created within the directory inherit the attribute.

  • Use the ceph-fuse client to mount the file system on a new directory called /mnt/cephfuse.

  • Configure the CephFS file system to be mounted on each system startup. Verify that the /etc/fstab file is updated accordingly.

  1. Create a CephFS file system cl260-fs. Create an MDS service called cl260-fs with an MDS instance on serverc and another on serverd. Create a data pool called cephfs.cl260-fs.data and a metadata pool called cephfs.cl260-fs.meta. Use replicated as the type for both pools. Verify that the MDS service is up and running.

    1. Log in to clienta and use sudo to run the cephadm shell.

      [student@workstation ~]$ ssh admin@clienta
      [admin@clienta ~]$ sudo cephadm shell
      [ceph: root@clienta /]#
    2. Create a data pool called cephfs.cl260-fs.data and a metadata pool called cephfs.cl260-fs.meta for the CephFS service.

      [ceph: root@clienta /]# ceph osd pool create cephfs.cl260-fs.data
      pool 'cephfs.cl260-fs.data' created
      [ceph: root@clienta /]# ceph osd pool create cephfs.cl260-fs.meta
      pool 'cephfs.cl260-fs.meta' created
      [ceph: root@clienta /]# ceph osd pool ls
      ...output omitted...
      cephfs.cl260-fs.data
      cephfs.cl260-fs.meta
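
      Optionally, confirm that both pools use the replicated type required by the specifications. The size and crush_rule values shown here are typical defaults and might differ in your cluster.

      [ceph: root@clienta /]# ceph osd pool get cephfs.cl260-fs.data size
      size: 3
      [ceph: root@clienta /]# ceph osd pool get cephfs.cl260-fs.meta crush_rule
      crush_rule: replicated_rule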
    3. Create a CephFS file system called cl260-fs. The ceph fs new command takes the metadata pool name first, followed by the data pool name.

      [ceph: root@clienta /]# ceph fs new cl260-fs cephfs.cl260-fs.meta \
      cephfs.cl260-fs.data
      new fs with metadata pool 15 and data pool 14
      [ceph: root@clienta /]# ceph fs ls
      name: cl260-fs, metadata pool: cephfs.cl260-fs.meta, data pools: [cephfs.cl260-fs.data ]
    4. Create an MDS service called cl260-fs with two MDS instances, one on the serverc node and another on the serverd node.

      [ceph: root@clienta /]# ceph orch apply mds cl260-fs \
        --placement="2 serverc.lab.example.com serverd.lab.example.com"
      Scheduled mds.cl260-fs update...
      [ceph: root@clienta /]# ceph orch ps --daemon-type mds
      NAME                         HOST                     STATUS         REFRESHED ...
      mds.cl260-fs.serverc.iuwwzt  serverc.lab.example.com  running (53s)  46s ago   ...
      mds.cl260-fs.serverd.lapeyj  serverd.lab.example.com  running (50s)  47s ago   ...
    5. Verify that the MDS service is up and running.

      [ceph: root@clienta /]# ceph mds stat
      cl260-fs:1 {0=cl260-fs.serverd.lapeyj=up:active} 1 up:standby
      [ceph: root@clienta /]# ceph status
        cluster:
          id:     2ae6d05a-229a-11ec-925e-52540000fa0c
          health: HEALTH_OK
      
        services:
      ...output omitted...
          mds: 1/1 daemons up, 1 standby
      ...output omitted...
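
      Note that newer Ceph releases can also create the pools, the file system, and the MDS service in a single step with the ceph fs volume create command. The following one-line alternative is a sketch for reference only, assuming your release supports the --placement option; this lab expects the individual steps shown above.

      [ceph: root@clienta /]# ceph fs volume create cl260-fs \
        --placement="2 serverc.lab.example.com serverd.lab.example.com"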
  2. Install the ceph-common package. Mount the CephFS file system on the /mnt/cephfs directory on the clienta host. Save the key associated with the client.admin user to the /root/secretfile file. Use this file to authenticate the mount operation. Verify that the /mnt/cephfs directory is owned by the admin user.

    1. Exit the cephadm shell and switch to the root user. Install the ceph-common package.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo -i
      [root@clienta ~]# yum install -y ceph-common
      ...output omitted...
    2. Extract the key associated with the client.admin user, and save it in the /root/secretfile file.

      [root@clienta ~]# ceph auth get-key client.admin | tee /root/secretfile
      ...output omitted...
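
      The file now contains only the base64-encoded secret key. Although this lab does not require it, tightening the file permissions is a sensible precaution because the client.admin key grants full access to the cluster.

      [root@clienta ~]# chmod 600 /root/secretfile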
    3. Create a new directory called /mnt/cephfs to use as a mount point for the CephFS file system. Mount your new CephFS file system on that directory.

      [root@clienta ~]# mkdir /mnt/cephfs
      [root@clienta ~]# mount -t ceph serverc:/ /mnt/cephfs \
        -o name=admin,secretfile=/root/secretfile
    4. Verify the mount.

      [root@clienta ~]# df /mnt/cephfs
      Filesystem      1K-blocks  Used Available Use% Mounted on
      172.25.250.12:/  29790208     0  29790208   0% /mnt/cephfs
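
      For a more resilient mount, you can list several monitor hosts in the device field so that the client can fall back to another monitor. This sketch assumes that serverd and servere also run monitors in your environment.

      [root@clienta ~]# mount -t ceph serverc,serverd,servere:/ /mnt/cephfs \
        -o name=admin,secretfile=/root/secretfile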
    5. Change the ownership of the top-level directory of the mounted file system to user and group admin.

      [root@clienta ~]# chown admin:admin /mnt/cephfs
  3. As the admin user, create the ceph01 and ceph02 directories. Create an empty file called firstfile in the ceph01 directory. Ensure that the directories and their contents are owned by the admin user.

    1. Exit the root user session. Create two directories directly underneath the mount point, and name them ceph01 and ceph02.

      [root@clienta ~]# exit
      exit
      [admin@clienta ~]$ mkdir /mnt/cephfs/ceph01
      [admin@clienta ~]$ mkdir /mnt/cephfs/ceph02
    2. Create an empty file called firstfile in the ceph01 directory.

      [admin@clienta ~]$ touch /mnt/cephfs/ceph01/firstfile
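
      To confirm the ownership requirement from the specifications, list the directories and their contents. Everything should be owned by the admin user because you created the entries from an admin session on a mount point that admin owns.

      [admin@clienta ~]$ ls -ld /mnt/cephfs/ceph01 /mnt/cephfs/ceph02
      ...output omitted...
      [admin@clienta ~]$ ls -l /mnt/cephfs/ceph01
      ...output omitted...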
  4. Set the ceph.dir.layout.stripe_count layout attribute to 2 on the /mnt/cephfs/ceph01 directory so that new files created in it inherit the attribute. Create a 10 MB file called secondfile in the /mnt/cephfs/ceph01 directory and verify that it has the new layout attribute.

    1. Verify the current layout of the /mnt/cephfs/ceph01 directory.

      [admin@clienta ~]$ getfattr -n ceph.dir.layout /mnt/cephfs/ceph01
      /mnt/cephfs/ceph01: ceph.dir.layout: No such attribute
    2. Set the ceph.dir.layout.stripe_count layout attribute to 2 for the /mnt/cephfs/ceph01 directory.

      [admin@clienta ~]$ setfattr -n ceph.dir.layout.stripe_count -v 2 \
        /mnt/cephfs/ceph01
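
      If you repeat the getfattr command now, the directory reports a complete layout that includes the new stripe count. The stripe_unit and object_size values shown here are cluster defaults and might differ.

      [admin@clienta ~]$ getfattr -n ceph.dir.layout /mnt/cephfs/ceph01
      getfattr: Removing leading '/' from absolute path names
      # file: mnt/cephfs/ceph01
      ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs.cl260-fs.data"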
    3. Create a 10 MB file called secondfile in the /mnt/cephfs/ceph01 directory.

      [admin@clienta ~]$ dd if=/dev/zero of=/mnt/cephfs/ceph01/secondfile \
        bs=1024 count=10000
      10000+0 records in
      10000+0 records out
      10240000 bytes (10 MB, 9.8 MiB) copied, 0.0391504 s, 262 MB/s
    4. Verify that the secondfile file has the correct layout attribute.

      [admin@clienta ~]$ getfattr -n ceph.file.layout /mnt/cephfs/ceph01/secondfile
      getfattr: Removing leading '/' from absolute path names
      # file: mnt/cephfs/ceph01/secondfile
      ceph.file.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs.cl260-fs.data"
  5. Switch to the root user. Install the ceph-fuse client and use it to mount the CephFS file system on a new directory called /mnt/cephfuse.

    1. Switch to the root user. Install the ceph-fuse package.

      [admin@clienta ~]$ sudo -i
      [root@clienta ~]# yum install -y ceph-fuse
      ...output omitted...
    2. Create a directory called /mnt/cephfuse to use as a mount point for the FUSE client. Mount the CephFS file system on that directory by using ceph-fuse.

      [root@clienta ~]# mkdir /mnt/cephfuse
      [root@clienta ~]# ceph-fuse -m serverc: /mnt/cephfuse/
      2021-11-01T20:18:33.004-0400 7f14907ea200 -1 init, newargv = 0x5621fed5f550 newargc=15
      ceph-fuse[48516]: starting ceph client
      ceph-fuse[48516]: starting fuse
    3. View the contents of the /mnt directory.

      [root@clienta ~]# tree /mnt
      /mnt
      |-- cephfs
      |   |-- ceph01
      |   |   |-- firstfile
      |   |   `-- secondfile
      |   `-- ceph02
      `-- cephfuse
          |-- ceph01
          |   |-- firstfile
          |   `-- secondfile
          `-- ceph02
      
      6 directories, 4 files
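
      The ceph-fuse mount performed here does not persist across reboots. If persistence were required, the fuse.ceph file system type can be used in /etc/fstab; the following entry is a sketch for reference only and is not part of this lab's requirements.

      none  /mnt/cephfuse  fuse.ceph  ceph.id=admin,_netdev,defaults  0 0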
  6. Configure the CephFS file system to be persistently mounted at startup. Use the contents of the /root/secretfile file to configure the mount operation in the /etc/fstab file. Verify that the configuration works as expected by using the mount -a command.

    1. View the admin key saved in the /root/secretfile file.

      [root@clienta ~]# cat /root/secretfile
      AQA11VZhyq8VGRAAOus0I5xLWMSdAW/759e32A==
    2. Configure the /etc/fstab file to mount the file system at startup. The /etc/fstab file should look like the following output.

      [root@clienta ~]# cat /etc/fstab
      ...output omitted...
      serverc:/ /mnt/cephfs ceph rw,seclabel,relatime,name=admin,secret=AQA11VZhyq8VGRAAOus0I5xLWMSdAW/759e32A==,acl 0 0
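
      This entry embeds the secret key directly in /etc/fstab. As an alternative sketch, the mount.ceph helper also accepts the secretfile option in fstab, which keeps the key out of the file; the _netdev option delays the mount until networking is up.

      serverc:/ /mnt/cephfs ceph name=admin,secretfile=/root/secretfile,_netdev 0 0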
    3. Unmount the CephFS file system, then remount it by using the mount -a command. Verify the mount.

      [root@clienta ~]# umount /mnt/cephfs
      [root@clienta ~]# mount -a
      [root@clienta ~]# df /mnt/cephfs/
      Filesystem      1K-blocks  Used Available Use% Mounted on
      172.25.250.12:/  29773824 12288  29761536   1% /mnt/cephfs
    4. Return to workstation as the student user.

      [root@clienta ~]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

Grade your work by running the lab grade comprehensive-review3 command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade comprehensive-review3

Finish

As the student user on the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish comprehensive-review3

This concludes the lab.

Revision: cl260-5.0-29d2128