Lab: Creating Object Storage Cluster Components

In this lab, you will create and manage cluster components and authentication.

Outcomes

You should be able to create and configure BlueStore OSDs and pools, and set up authentication to the cluster.

As the student user on the workstation machine, use the lab command to prepare your system for this lab.

[student@workstation ~]$ lab start component-review

This command confirms that the hosts required for this exercise are accessible.

Procedure 4.4. Instructions

  1. Log in to clienta as the admin user. Create a new OSD daemon by using the /dev/vde device on serverc. View the details of the OSD. Restart the OSD daemon and verify it starts correctly.

    1. Log in to clienta as the admin user and use sudo to run the cephadm shell.

      [student@workstation ~]$ ssh admin@clienta
      [admin@clienta ~]$ sudo cephadm shell
      [ceph: root@clienta /]#
    2. Create a new OSD daemon by using the /dev/vde device on serverc.

      [ceph: root@clienta /]# ceph orch daemon add osd serverc.lab.example.com:/dev/vde
      Created osd(s) 9 on host 'serverc.lab.example.com'
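
      Optionally, confirm that the new OSD appears in the CRUSH map under the serverc host bucket and is marked up. This extra check is not part of the lab script.

      [ceph: root@clienta /]# ceph osd tree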
    3. View the details of the OSD. The OSD ID might be different in your lab environment.

      [ceph: root@clienta /]# ceph osd find 9
      {
          "osd": 0,
          "addrs": {
              "addrvec": [
                  {
                      "type": "v2",
                      "addr": "172.25.250.12:6816",
                      "nonce": 2214147187
                  },
                  {
                      "type": "v1",
                      "addr": "172.25.250.12:6817",
                      "nonce": 2214147187
                  }
              ]
          },
          "osd_fsid": "eae3b333-24f3-46fb-83a5-b1de2559166b",
          "host": "serverc.lab.example.com",
          "crush_location": {
              "host": "serverc",
              "root": "default"
          }
      }
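
      If you want to confirm that this OSD uses the /dev/vde device and the BlueStore back end, you can optionally dump its metadata and review fields such as osd_objectstore and devices. The OSD ID here assumes the output above.

      [ceph: root@clienta /]# ceph osd metadata 9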
    4. Restart the OSD daemon.

      [ceph: root@clienta /]# ceph orch daemon restart osd.9
      Scheduled to restart osd.9 on host 'serverc.lab.example.com'
    5. Verify that the OSD starts correctly.

      [ceph: root@clienta /]# ceph orch ps
      ...output omitted...
      osd.9                             serverc.lab.example.com  running (13m)  2m ago     21m  -              16.2.0-117.el8cp  2142b60d7974  d4773b95c856
      ...output omitted...
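
      As an additional optional check, confirm that the cluster reports all OSDs, including the restarted one, as up and in. The OSD counts depend on your environment.

      [ceph: root@clienta /]# ceph osd stat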
  2. Create a replicated pool called labpool1 with 64 PGs. Set the number of replicas to 3. Set the application type to rbd. Set pg_autoscale_mode to on for the pool.

    1. Create a replicated pool called labpool1 with 64 PGs.

      [ceph: root@clienta /]# ceph osd pool create labpool1 64 64 replicated
      pool 'labpool1' created
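
      The two 64 values set pg_num and pgp_num for the pool. You can optionally confirm them, keeping in mind that the autoscaler might adjust these values later.

      [ceph: root@clienta /]# ceph osd pool get labpool1 pg_num
      [ceph: root@clienta /]# ceph osd pool get labpool1 pgp_num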
    2. Set the number of replicas to 3. The pool ID might be different in your lab environment.

      [ceph: root@clienta /]# ceph osd pool set labpool1 size 3
      set pool 6 size to 3
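
      Optionally, read the setting back to verify the new replica count.

      [ceph: root@clienta /]# ceph osd pool get labpool1 size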
    3. Set the application type to rbd.

      [ceph: root@clienta /]# ceph osd pool application enable labpool1 rbd
      enabled application 'rbd' on pool 'labpool1'
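
      You can optionally confirm which application is enabled on the pool.

      [ceph: root@clienta /]# ceph osd pool application get labpool1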
    4. Set pg_autoscale_mode to on for the pool.

      [ceph: root@clienta /]# ceph osd pool set labpool1 pg_autoscale_mode on
      set pool 6 pg_autoscale_mode to on
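
      To see how the autoscaler now treats the pool, you can optionally display its status. The output columns vary between Ceph releases.

      [ceph: root@clienta /]# ceph osd pool autoscale-status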
  3. Create an erasure code profile called k8m4 with data chunks on 8 OSDs (k=8), able to sustain the loss of 4 OSDs (m=4), and set crush-failure-domain=rack. Create an erasure coded pool called labpool2 with 64 PGs that uses the k8m4 profile.

    1. Create an erasure code profile called k8m4 with data chunks on 8 OSDs (k=8), able to sustain the loss of 4 OSDs (m=4), and set crush-failure-domain=rack.

      [ceph: root@clienta /]# ceph osd erasure-code-profile set k8m4 k=8 m=4 \
      crush-failure-domain=rack
      [ceph: root@clienta /]#
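
      Optionally, display the profile to confirm the k, m, and crush-failure-domain values before creating a pool with it.

      [ceph: root@clienta /]# ceph osd erasure-code-profile get k8m4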
    2. Create an erasure coded pool called labpool2 with 64 PGs that uses the k8m4 profile.

      [ceph: root@clienta /]# ceph osd pool create labpool2 64 64 erasure k8m4
      pool 'labpool2' created
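
      An erasure coded pool that uses this profile stores each object as k+m=12 chunks, so the pool size is 12. Optionally, review the pool settings; the pool ID might be different in your lab environment.

      [ceph: root@clienta /]# ceph osd pool ls detail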
  4. Create the client.rwpool user account with the capabilities to read and write objects in the labpool1 pool. This user must not be able to access the labpool2 pool in any way.

    Create the client.rpool user account with the capability to read only objects whose names begin with the my_ prefix from the labpool1 pool.

    Store the key-ring files for these two accounts in the correct location on clienta.

    Store the /etc/profile file as the my_profile object in the labpool1 pool.

    1. Exit the cephadm shell, then run cephadm shell noninteractively from the clienta host system to create the two accounts. Create the client.rwpool user account with read and write access to the labpool1 pool.

      [ceph: root@clienta /]# exit
      exit
      [admin@clienta ~]$ sudo cephadm shell -- ceph auth get-or-create client.rwpool \
       mon 'allow r' osd 'allow rw pool=labpool1' | sudo tee \
       /etc/ceph/ceph.client.rwpool.keyring
      [client.rwpool]
      	key = AQAn7FNhDd5uORAAqZPIq7nU0yDWebk2EXukOw==

      Because you explicitly restrict the capability with the pool=labpool1 argument, the user cannot access any other pool. Therefore, the client.rwpool user cannot access the labpool2 pool, which satisfies the requirement.
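
      If you want to review the capabilities that the cluster stored for the account, you can optionally query them. This check is not part of the graded lab.

      [admin@clienta ~]$ sudo cephadm shell -- ceph auth get client.rwpool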

    2. Create the client.rpool user account with read access to objects whose names begin with the my_ prefix in the labpool1 pool. Note that there is no equals sign (=) between object_prefix and its value.

      [admin@clienta ~]$ sudo cephadm shell -- ceph auth get-or-create client.rpool \
       mon 'allow r' osd 'allow r pool=labpool1 object_prefix my_' | sudo tee \
       /etc/ceph/ceph.client.rpool.keyring
      [client.rpool]
      	key = AQAD7VNhV0oWIhAAFTR+F3zuY3087n1OaLELVA==
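
      Similarly, you can optionally inspect the capabilities recorded for the read-only account.

      [admin@clienta ~]$ sudo cephadm shell -- ceph auth get client.rpool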
    3. Use sudo to run a new cephadm shell with a bind mount from the host. Use the rados command to store the /etc/profile file as the my_profile object in the labpool1 pool. Use the client.rwpool user account rather than the default client.admin account to test the access rights you defined for the user.

      [admin@clienta ~]$ sudo cephadm shell --mount /etc/ceph:/etc/ceph
      [ceph: root@clienta /]# rados --id rwpool -p labpool1 put my_profile /etc/profile
    4. Verify that the client.rpool user can retrieve the my_profile object from the labpool1 pool.

      [ceph: root@clienta /]# rados --id rpool -p labpool1 get \
      my_profile /tmp/profile.out
      [ceph: root@clienta /]# diff /etc/profile /tmp/profile.out
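
      As an optional negative test, you can confirm that the restrictions behave as intended: client.rwpool should be denied any access to the labpool2 pool, and client.rpool should be denied write access to the labpool1 pool. Both commands are expected to fail with an "Operation not permitted" or similar error; the my_test object name and the /etc/hosts file are only examples.

      [ceph: root@clienta /]# rados --id rwpool -p labpool2 ls
      [ceph: root@clienta /]# rados --id rpool -p labpool1 put my_test /etc/hosts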
  5. Return to workstation as the student user.

    1. Return to workstation as the student user.

      [ceph: root@clienta /]# exit
      [admin@clienta ~]$ exit
      [student@workstation ~]$

Evaluation

Grade your work by running the lab grade component-review command from your workstation machine. Correct any reported failures and rerun the script until successful.

[student@workstation ~]$ lab grade component-review

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish component-review

This concludes the lab.

Revision: cl260-5.0-29d2128