Guided Exercise: Creating and Configuring Pools

In this exercise, you will create and configure replicated and erasure coded storage pools.

Outcomes

You should be able to create, delete, and rename pools as well as view and configure pool settings.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

[student@workstation ~]$ lab start component-pool

This command confirms that the hosts required for this exercise are accessible.

Procedure 4.2. Instructions

  1. Log in to clienta as the admin user and use sudo to run the cephadm shell.

    [student@workstation ~]$ ssh admin@clienta
    [admin@clienta ~]$ sudo cephadm shell
    [ceph: root@clienta /]#
  2. Create a replicated pool called replpool1 with 64 placement groups (PGs).

    [ceph: root@clienta /]# ceph osd pool create replpool1 64 64
    pool 'replpool1' created
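
    The two 64 values set the pg_num and pgp_num values for the new pool. As an optional check that is not part of this exercise, you can also view the default replica count that new pools inherit:

    [ceph: root@clienta /]# ceph config get mon osd_pool_default_size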
  3. Verify that PG autoscaling is enabled for the replpool1 pool and that it is the default for new pools.

    [ceph: root@clienta /]# ceph osd pool get replpool1 pg_autoscale_mode
    pg_autoscale_mode: on
    [ceph: root@clienta /]# ceph config get mon osd_pool_default_pg_autoscale_mode
    on
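
    If you ever need to size placement groups manually, the autoscaler can be disabled for a single pool. This is only a sketch and is not required for this exercise; setting the mode back to on re-enables autoscaling:

    [ceph: root@clienta /]# ceph osd pool set replpool1 pg_autoscale_mode off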
  4. List the pools, verify the existence of the replpool1 pool, and view the autoscale status for the pools.

    1. List the pools and verify the existence of the replpool1 pool.

      [ceph: root@clienta /]# ceph osd lspools
      1 device_health_metrics
      2 .rgw.root
      3 default.rgw.log
      4 default.rgw.control
      5 default.rgw.meta
      6 replpool1
    2. View the autoscale status.

      [ceph: root@clienta /]# ceph osd pool autoscale-status
      POOL                     SIZE ... AUTOSCALE
      device_health_metrics       0           on
      .rgw.root                2466           on
      default.rgw.control         0           on
      default.rgw.meta          393           on
      default.rgw.log          3520           on
      replpool1                   0           on
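
      The ceph df command offers another view of per-pool usage. Running it is an optional check that this exercise does not require:

      [ceph: root@clienta /]# ceph df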
  5. Set the number of replicas for the replpool1 pool to 4. Set the minimum number of replicas required for I/O to 2, so that the pool continues to serve I/O even if two of its OSDs fail. Set the application type for the pool to rbd. Use the ceph osd pool ls detail command to verify the pool configuration settings. Use the ceph osd pool get command to get the value of a specific setting.

    1. Set the number of replicas for the replpool1 pool to 4. Set the minimum number of replicas required for I/O to 2.

      [ceph: root@clienta /]# ceph osd pool set replpool1 size 4
      set pool 6 size to 4
      [ceph: root@clienta /]# ceph osd pool set replpool1 min_size 2
      set pool 6 min_size to 2
    2. Set the application type for the pool to rbd.

      [ceph: root@clienta /]# ceph osd pool application enable replpool1 rbd
      enabled application 'rbd' on pool 'replpool1'
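
      To confirm the assignment, you can list the application metadata for the pool. This is an optional check, not one of the exercise steps:

      [ceph: root@clienta /]# ceph osd pool application get replpool1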
    3. Use the ceph osd pool ls detail command to verify the pool configuration settings.

      [ceph: root@clienta /]# ceph osd pool ls detail
      ...output omitted...
      pool 6 'replpool1' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 366 flags hashpspool stripe_width 0 application rbd
      [ceph: root@clienta /]# ceph osd pool get replpool1 size
      size: 4
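
      To review every gettable setting at once, you can pass all to the same command. This is an optional variation on the step above:

      [ceph: root@clienta /]# ceph osd pool get replpool1 all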

      Note

      The pool uses CRUSH rule 0. Configuring CRUSH rules and assigning them to pools is covered in a later chapter.
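
      As an optional preview that this exercise does not require, you can list the CRUSH rules that already exist in the cluster:

      [ceph: root@clienta /]# ceph osd crush rule ls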

  6. Rename the replpool1 pool to newpool. Delete the newpool pool.

    1. Rename the replpool1 pool to newpool.

      [ceph: root@clienta /]# ceph osd pool rename replpool1 newpool
      pool 'replpool1' renamed to 'newpool'
    2. Delete the newpool pool.

      [ceph: root@clienta /]# ceph osd pool delete newpool
      Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool newpool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
      
      [ceph: root@clienta /]# ceph osd pool delete newpool newpool \
      --yes-i-really-really-mean-it
      Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
      
      [ceph: root@clienta /]# ceph tell mon.* config set mon_allow_pool_delete true
      mon.serverc.lab.example.com: {
          "success": ""
      }
      mon.serverd: {
          "success": ""
      }
      mon.servere: {
          "success": ""
      }
      mon.clienta: {
          "success": ""
      }
      
      [ceph: root@clienta /]# ceph osd pool delete newpool newpool \
      --yes-i-really-really-mean-it
      pool 'newpool' removed
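
      In a production cluster you would typically restore this safeguard after the deletion. Doing so here is optional and not part of the exercise:

      [ceph: root@clienta /]# ceph tell mon.* config set mon_allow_pool_delete false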

      Important

      When you rename a pool, you must update any associated user authentication settings with the new pool name. User authentication and capabilities are covered in a later chapter.

  7. List the existing erasure code profiles and view the details of the default profile. Create an erasure code profile called ecprofile-k4-m2 with k=4 and m=2 values. These values allow the simultaneous loss of two OSDs without losing any data and meet the minimum requirement for Red Hat support.

    1. View the configured erasure code profiles.

      [ceph: root@clienta /]# ceph osd erasure-code-profile ls
      default
    2. View the details of the default profile.

      [ceph: root@clienta /]# ceph osd erasure-code-profile get default
      k=2
      m=1
      plugin=jerasure
      technique=reed_sol_van
    3. Create an erasure code profile called ecprofile-k4-m2 with k=4 and m=2 values.

      [ceph: root@clienta /]# ceph osd erasure-code-profile set ecprofile-k4-m2 k=4 m=2
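
      You can confirm the new profile with the same get subcommand that you used for the default profile. This optional check is not part of the exercise:

      [ceph: root@clienta /]# ceph osd erasure-code-profile get ecprofile-k4-m2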
  8. Create an erasure coded pool called ecpool1 using the ecprofile-k4-m2 profile with 64 placement groups and an rgw application type. View the details of the ecpool1 pool. Configure the ecpool1 pool to allow partial overwrites so that RBD and CephFS can use it. Delete the ecpool1 pool.

    1. Create an erasure coded pool called ecpool1 by using the ecprofile-k4-m2 profile with 64 placement groups and set the application type to rgw.

      [ceph: root@clienta /]# ceph osd pool create ecpool1 64 64 erasure ecprofile-k4-m2
      pool 'ecpool1' created
      [ceph: root@clienta /]# ceph osd pool application enable ecpool1 rgw
      enabled application 'rgw' on pool 'ecpool1'
    2. View the details of the ecpool1 pool. Your pool ID might be different.

      [ceph: root@clienta /]# ceph osd pool ls detail
      ...output omitted...
      pool 7 'ecpool1' erasure profile ecprofile-k4-m2 size 6 min_size 5 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 373 flags hashpspool,creating stripe_width 16384 application rgw
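
      In this output, size 6 is k + m (4 + 2) and min_size 5 is the smallest number of shards that must be available for the pool to serve I/O. As an optional check, you can also read the profile assigned to the pool:

      [ceph: root@clienta /]# ceph osd pool get ecpool1 erasure_code_profile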
    3. Configure the ecpool1 pool to allow partial overwrites so that RBD and CephFS can use it.

      [ceph: root@clienta /]# ceph osd pool set ecpool1 allow_ec_overwrites true
      set pool 7 allow_ec_overwrites to true
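
      As with other pool settings, you can read the value back with ceph osd pool get; an optional check:

      [ceph: root@clienta /]# ceph osd pool get ecpool1 allow_ec_overwrites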
    4. Delete the ecpool1 pool.

      [ceph: root@clienta /]# ceph osd pool delete ecpool1 ecpool1 \
      --yes-i-really-really-mean-it
      pool 'ecpool1' removed
  9. Return to workstation as the student user.

    [ceph: root@clienta /]# exit
    [admin@clienta ~]$ exit
    [student@workstation ~]$
  10. Create a replicated pool using the Ceph Dashboard.

    1. Open a web browser and navigate to https://serverc:8443.

    2. Log in as admin by using redhat as the password. You should see the Dashboard page.

      Figure 4.5: The Ceph Storage Dashboard
    3. Click Pools to display the Pools page. Click Create.

      Figure 4.6: The Pools page
    4. Enter replpool1 in the Name field, select replicated as the Pool type, set PG Autoscale to on, and enter 3 in the Replicated size field. Leave the other values at their defaults. Click Create Pool.

      Figure 4.7: Creating a replicated pool in the Ceph Dashboard
      Figure 4.8: Creating a replicated pool in the Ceph Dashboard
  11. Create an erasure coded pool using the Ceph Dashboard.

    1. Click Pools to display the Pools page. Click Create.

      Figure 4.9: The Pools page
    2. Enter ecpool1 in the Name field, select erasure as the Pool type, set PG Autoscale to off, and enter 64 in the Placement groups field. Select the EC Overwrites check box in the Flags section, and select the ecprofile-k4-m2 profile in the Erasure code profile field. Click Create Pool.

      Figure 4.10: Creating an erasure coded pool in the Ceph Dashboard
      Figure 4.11: Creating an erasure coded pool in the Ceph Dashboard

Finish

On the workstation machine, use the lab command to complete this exercise. This is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish component-pool

This concludes the guided exercise.
