
Lab: Providing File Storage with CephFS

In this lab, you provide file storage by using the kernel client and deploying a Ceph Metadata Server (MDS).

Outcomes

You should be able to deploy an MDS and use the kernel client to mount the CephFS file system.

This lab uses the following environment:

  • The serverc, serverd, and servere nodes form an operational three-node Ceph cluster. All three nodes operate as a MON, a MGR, and an OSD host with at least one colocated OSD.

  • The clienta node is set up as your admin node, and you use it to install the MDS on serverc.
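
Once you are logged in to clienta (as in the first step of the procedure), you can optionally confirm this layout by listing the cluster hosts with the cephadm orchestrator. This check is not required by the lab, and the exact output depends on your environment.

[admin@clienta ~]$ sudo cephadm shell -- ceph orch host ls
...output omitted...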

As the student user on the workstation machine, use the lab command to prepare your system for this lab.

[student@workstation ~]$ lab start fileshare-review

Procedure 10.3. Instructions

  1. Log in to clienta as the admin user. Create the cephfs_data and cephfs_metadata pools for CephFS. Create the mycephfs CephFS file system. From clienta, deploy the MDS to serverc. Verify that the MDS is up and active. Verify that the cluster health is OK.

    1. Log in to clienta as the admin user and use sudo to run the cephadm shell.

      [student@workstation ~]$ ssh admin@clienta
      [admin@clienta ~]$ sudo cephadm shell
      [ceph: root@clienta /]#
    2. Create the two required CephFS pools. Name these pools cephfs_data and cephfs_metadata.

      [ceph: root@clienta /]# ceph osd pool create cephfs_data
      pool 'cephfs_data' created
      [ceph: root@clienta /]# ceph osd pool create cephfs_metadata
      pool 'cephfs_metadata' created
    3. Create the CephFS file system with the name mycephfs. Your pool numbers might differ in your lab environment.

      [ceph: root@clienta /]# ceph fs new mycephfs cephfs_metadata cephfs_data
      new fs with metadata pool 7 and data pool 6
    4. Deploy the MDS service on serverc.lab.example.com.

      [ceph: root@clienta /]# ceph orch apply mds mycephfs \
      --placement="1 serverc.lab.example.com"
      Scheduled mds.mycephfs update...
    5. Verify that the MDS service is active. It can take some time for the MDS service to appear in the output. An optional additional check is shown at the end of this step.

      [ceph: root@clienta /]# ceph mds stat
      mycephfs:1 {0=mycephfs.serverc.mycctv=up:active}
    6. Verify that the cluster health is OK.

      [ceph: root@clienta /]# ceph status
        cluster:
          id:     ff97a876-1fd2-11ec-8258-52540000fa0c
          health: HEALTH_OK
      
        services:
          mon: 4 daemons, quorum serverc.lab.example.com,servere,serverd,clienta (age 2h)
          mgr: serverc.lab.example.com.btgxor(active, since 2h), standbys: clienta.soxncl, servere.fmyxwv, serverd.ufqxxk
          mds: 1/1 daemons up
          osd: 9 osds: 9 up (since 2h), 9 in (since 36h)
          rgw: 2 daemons active (2 hosts, 1 zones)
      
        data:
          volumes: 1/1 healthy
          pools:   7 pools, 169 pgs
          objects: 212 objects, 7.5 KiB
          usage:   162 MiB used, 90 GiB / 90 GiB avail
          pgs:     169 active+clean
      
        io:
          client:   1.1 KiB/s wr, 0 op/s rd, 3 op/s wr
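    Optionally, while you are still in the cephadm shell, you can also confirm that the file system exists and inspect the MDS daemon placement. This check is not required by the lab, and the daemon name in your output will differ.

      [ceph: root@clienta /]# ceph fs ls
      ...output omitted...
      [ceph: root@clienta /]# ceph orch ps --daemon_type mds
      ...output omitted...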
  2. On the clienta node, create the /mnt/cephfs-review mount point and mount the CephFS file system as a kernel client.

    1. Exit the cephadm shell. Verify that the Ceph client keyring is present in the /etc/ceph directory on the client node.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo ls -l /etc/ceph
      total 12
      -rw-r--r--. 1 root root  63 Sep 27 16:42 ceph.client.admin.keyring
      -rw-r--r--. 1 root root 177 Sep 27 16:42 ceph.conf
      -rw-------. 1 root root  82 Sep 27 16:42 podman-auth.json
    2. Install the ceph-common package on the client node.

      [admin@clienta ~]$ sudo yum install -y ceph-common
      ...output omitted...
      Complete!
    3. Create the /mnt/cephfs-review mount point directory. Mount the new CephFS file system as a kernel client. An optional example of a persistent mount entry is shown at the end of this step.

      [admin@clienta ~]$ sudo mkdir /mnt/cephfs-review
      [admin@clienta ~]$ sudo mount.ceph serverc.lab.example.com:/ /mnt/cephfs-review \
      -o name=admin
    4. Change the ownership of the top-level directory of the mounted file system to user and group admin.

      [admin@clienta ~]$ sudo chown admin:admin /mnt/cephfs-review
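    The lab only needs this temporary mount, but to make a kernel-client mount persist across reboots you could add an entry to /etc/fstab. The following line is a sketch, not part of the graded exercise; it assumes the admin keyring stays in /etc/ceph and uses the _netdev option so the mount waits for the network.

      # Hypothetical /etc/fstab entry for the CephFS kernel client (not required for this lab)
      serverc.lab.example.com:/  /mnt/cephfs-review  ceph  name=admin,_netdev  0 0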
  3. Create a 10 MB test file called cephfs.test1. Verify that the data is replicated across all three OSD nodes by confirming that the cephfs_data pool reports 30 MB of usage.

    1. Use the dd command to create one 10 MB file, and then verify that its reported usage is three times that size across the OSD nodes. The arithmetic behind this check is explained at the end of this step.

      [admin@clienta ~]$ dd if=/dev/zero of=/mnt/cephfs-review/cephfs.test1 \
      bs=1M count=10
      10+0 records in
      10+0 records out
      10485760 bytes (10 MB, 10 MiB) copied, 0.0291862 s, 359 MB/s
      [admin@clienta ~]$ sudo cephadm shell -- ceph fs status
      Inferring fsid ff97a876-1fd2-11ec-8258-52540000fa0c
      Inferring config /var/lib/ceph/ff97a876-1fd2-11ec-8258-52540000fa0c/mon.clienta/config
      Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:6306...47ff
      mycephfs - 1 clients
      ========
      RANK  STATE             MDS               ACTIVITY     DNS    INOS   DIRS   CAPS
       0    active  mycephfs.serverc.nsihbi  Reqs:    0 /s    11     14     12      2
            POOL         TYPE     USED  AVAIL
      cephfs_metadata  metadata   120k  28.4G
        cephfs_data      data    30.0M  28.4G
      MDS version: ceph version 16.2.0-117.el8cp (0e34bb74700060ebfaa22d99b7d2cdc037b28a57) pacific (stable)
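    The 30 MB usage comes from the replication factor of the cephfs_data pool: with the default replicated size of 3, the 10 MB file is stored as three copies, one on each OSD node (10 MB × 3 = 30 MB). If you want to confirm the replication factor, you can query the pool directly; this check is optional and should report a size of 3 in this environment.

      [admin@clienta ~]$ sudo cephadm shell -- ceph osd pool get cephfs_data size
      ...output omitted...
      size: 3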
  4. Return to workstation as the student user.

    [admin@clienta ~]$ exit
    [student@workstation ~]$

Evaluation

Grade your work by running the lab grade fileshare-review command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade fileshare-review

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish fileshare-review

This concludes the lab.
