In this exercise, you will create and configure replicated and erasure coded storage pools.
Outcomes
You should be able to create, delete, and rename pools as well as view and configure pool settings.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
[student@workstation ~]$ lab start component-pool
This command confirms that the hosts required for this exercise are accessible.
Procedure 4.2. Instructions
Log in to clienta as the admin user and use sudo to run the cephadm shell.
[student@workstation ~]$ ssh admin@clienta
[admin@clienta ~]$ sudo cephadm shell
[ceph: root@clienta /]#
Create a replicated pool called replpool1 with 64 Placement Groups (PGs).
[ceph: root@clienta /]# ceph osd pool create replpool1 64 64
pool 'replpool1' created
Verify that PG autoscaling is enabled for the replpool1 pool and that it is the default for new pools.
[ceph: root@clienta /]# ceph osd pool get replpool1 pg_autoscale_mode
pg_autoscale_mode: on
[ceph: root@clienta /]# ceph config get mon osd_pool_default_pg_autoscale_mode
on
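If you ever need to override autoscaling for a single pool, the mode can also be set per pool. A minimal example, not required by this exercise:
# Switch the pool to warn-only autoscaling, then re-enable it
ceph osd pool set replpool1 pg_autoscale_mode warn
ceph osd pool set replpool1 pg_autoscale_mode on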
List the pools, verify the existence of the replpool1 pool, and view the autoscale status for the pools.
List the pools and verify the existence of the replpool1 pool.
[ceph: root@clienta /]# ceph osd lspools
1 device_health_metrics
2 .rgw.root
3 default.rgw.log
4 default.rgw.control
5 default.rgw.meta
6 replpool1
View the autoscale status.
[ceph: root@clienta /]# ceph osd pool autoscale-status
POOL SIZE ... AUTOSCALE
device_health_metrics 0 on
.rgw.root 2466 on
default.rgw.control 0 on
default.rgw.meta 393 on
default.rgw.log 3520 on
replpool1 0 on
Set the number of replicas for the replpool1 pool to 4.
Set the minimum number of replicas required for I/O to 2, allowing up to two OSDs to fail without losing data.
Set the application type for the pool to rbd.
Use the ceph osd pool ls detail command to verify the pool configuration settings.
Use the ceph osd pool get command to get the value of a specific setting.
Set the number of replicas for the replpool1 pool to 4.
Set the minimum number of replicas required for I/O to two.
[ceph: root@clienta /]# ceph osd pool set replpool1 size 4
set pool 6 size to 4
[ceph: root@clienta /]# ceph osd pool set replpool1 min_size 2
set pool 6 min_size to 2
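With size set to 4 and min_size set to 2, the pool stores four copies of each object and continues to serve I/O as long as at least two copies remain available. As an optional check, not part of the original steps, you can read both values back:
# Read back the replication settings just applied
ceph osd pool get replpool1 size
ceph osd pool get replpool1 min_size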
Set the application type for the pool to rbd.
[ceph: root@clienta /]# ceph osd pool application enable replpool1 rbd
enabled application 'rbd' on pool 'replpool1'
Use the ceph osd pool ls detail command to verify the pool configuration settings.
[ceph: root@clienta /]# ceph osd pool ls detail
...output omitted...
pool 6 'replpool1' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 366 flags hashpspool stripe_width 0 application rbd
[ceph: root@clienta /]# ceph osd pool get replpool1 size
size: 4
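To review every configurable setting for the pool with one command, ceph osd pool get also accepts the keyword all; this is an optional check, not required by the exercise:
# Display all settings for the replpool1 pool
ceph osd pool get replpool1 all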
The pool uses CRUSH rule 0. Creating CRUSH rules and assigning them to pools are covered in a later chapter.
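Although CRUSH configuration is covered later, you can already list the rules that exist in the cluster. A minimal look, assuming the default rule is named replicated_rule:
# List the CRUSH rules defined in the cluster
ceph osd crush rule ls
# Dump the definition of the default replicated rule
ceph osd crush rule dump replicated_rule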
Rename the replpool1 pool to newpool.
Delete the newpool pool.
Rename the replpool1 pool to newpool.
[ceph: root@clienta /]# ceph osd pool rename replpool1 newpool
pool 'replpool1' renamed to 'newpool'
Delete the newpool pool.
[ceph: root@clienta /]# ceph osd pool delete newpool
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool newpool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[ceph: root@clienta /]# ceph osd pool delete newpool newpool \
--yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
[ceph: root@clienta /]# ceph tell mon.* config set mon_allow_pool_delete true
mon.serverc.lab.example.com: { "success": "" }
mon.serverd: { "success": "" }
mon.servere: { "success": "" }
mon.clienta: { "success": "" }
[ceph: root@clienta /]# ceph osd pool delete newpool newpool \
--yes-i-really-really-mean-it
pool 'newpool' removed
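In a production cluster you would normally turn this safety setting back off once the deletion is done; for example, not part of this exercise:
# Re-disable pool deletion on all MONs after the cleanup
ceph tell mon.* config set mon_allow_pool_delete false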
When you rename a pool, you must update any associated user authentication settings with the new pool name. User authentication and capabilities are covered in a later chapter.
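As a sketch only, assuming a hypothetical client.app1 user whose capabilities referenced the old pool name, you could inspect and update its capabilities as shown below; the user name and capability strings are illustrative:
# Inspect the current capabilities of the hypothetical client.app1 user
ceph auth get client.app1
# Rewrite the capabilities so that the OSD cap references the new pool name
ceph auth caps client.app1 mon 'allow r' osd 'allow rw pool=newpool'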
List the existing erasure coded profiles and view the details of the default profile.
Create an erasure code profile called ecprofile-k4-m2 with k=4 and m=2 values.
These values allow the simultaneous loss of two OSDs without losing any data and meet the minimum requirements for Red Hat support.
View the configured erasure coded profiles.
[ceph: root@clienta /]# ceph osd erasure-code-profile ls
default
View the details of the default profile.
[ceph: root@clienta /]# ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van
Create an erasure code profile called ecprofile-k4-m2 with k=4 and m=2 values.
[ceph: root@clienta /]# ceph osd erasure-code-profile set ecprofile-k4-m2 k=4 m=2
Create an erasure coded pool called ecpool1 using the ecprofile-k4-m2 profile with 64 placement groups and an rgw application type.
View the details of the ecpool1 pool.
Configure the ecpool1 pool to allow partial overwrites so that RBD and CephFS can use it.
Delete the ecpool1 pool.
Create an erasure coded pool called ecpool1 by using the ecprofile-k4-m2 profile with 64 placement groups and set the application type to rgw.
[ceph: root@clienta /]# ceph osd pool create ecpool1 64 64 erasure ecprofile-k4-m2
pool 'ecpool1' created
[ceph: root@clienta /]# ceph osd pool application enable ecpool1 rgw
enabled application 'rgw' on pool 'ecpool1'
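Because the pool uses the ecprofile-k4-m2 profile, each object is stored as k=4 data chunks plus m=2 coding chunks: the pool size is k+m=6, and the raw space consumed is (k+m)/k = 1.5 times the stored data, compared with 3 times for a three-replica pool. As an optional check, you can confirm which profile the pool uses:
# Show the erasure code profile assigned to the pool and its parameters
ceph osd pool get ecpool1 erasure_code_profile
ceph osd erasure-code-profile get ecprofile-k4-m2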
View the details of the ecpool1 pool.
The pool ID in your output might be different.
[ceph: root@clienta /]# ceph osd pool ls detail
...output omitted...
pool 7 'ecpool1' erasure profile ecprofile-k4-m2 size 6 min_size 5 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 373 flags hashpspool,creating stripe_width 16384 application rgw
Configure the ecpool1 pool to allow partial overwrites so that RBD and CephFS can use it.
[ceph: root@clienta /]# ceph osd pool set ecpool1 allow_ec_overwrites true
set pool 7 allow_ec_overwrites to true
Delete the ecpool1 pool.
[ceph: root@clienta /]# ceph osd pool delete ecpool1 ecpool1 \
--yes-i-really-really-mean-it
pool 'ecpool1' removed
Return to workstation as the student user.
[ceph: root@clienta /]# exit
[admin@clienta ~]$ exit
[student@workstation ~]$
Create a replicated pool using the Ceph Dashboard.
Open a web browser and navigate to https://serverc:8443.
Log in as admin by using redhat as the password.
You should see the Ceph Dashboard page.
Click Pools to display the Pools page.
Click Create.
Enter replpool1 in the Name field, select replicated in the Pool type field, on in the PG Autoscale field, and 3 in the Replicated size field.
Leave other values as default.
Click Create Pool.
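If you want to cross-check the new pool from the command line, an optional step not shown in the dashboard workflow, you can run a single command through the cephadm shell on clienta:
# Optional cross-check from the admin host used earlier in this exercise
ssh admin@clienta
sudo cephadm shell -- ceph osd pool ls detail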
Create an erasure coded pool using the Ceph Dashboard.
Click Pools to display the Pools page.
Click Create.
Enter ecpool1 in the Name field, select erasure in the Pool type field, off in the PG Autoscale field, and 64 in the Placement groups field.
Check the EC Overwrites box in the Flags section, and select the ecprofile-k4-m2 profile from the Erasure code profile field.
Click Create Pool.
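Similarly, you can confirm from the command line that the erasure coded pool allows partial overwrites; this check is optional and purely illustrative:
# Run on clienta, as in the previous optional check
sudo cephadm shell -- ceph osd pool get ecpool1 allow_ec_overwrites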
This concludes the guided exercise.