In this chapter, you learned:
Red Hat Ceph Storage 5 performance depends on the performance of the underlying components: storage devices, the network, and the operating system file systems.
Performance is improved by reducing latency, increasing IOPS, and increasing throughput. Tuning for one metric often adversely affects another, so choose your primary tuning metric based on the expected workload behavior of your storage cluster.
Ceph implements a scale-out architecture. Increasing the number of OSD nodes increases overall performance: the greater the parallel access, the greater the load capacity.
The rados bench and rbd bench commands are used to stress-test and benchmark a Ceph cluster.
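For example, a minimal benchmarking session might look like the following. The pool name testpool and image name testimage are hypothetical; substitute names from your own cluster.

```shell
# Write benchmark against the (hypothetical) pool "testpool" for 10 seconds,
# keeping the written objects so a read benchmark can follow.
rados bench -p testpool 10 write --no-cleanup

# Sequential read benchmark against the same objects, then clean up.
rados bench -p testpool 10 seq
rados cleanup -p testpool

# Write benchmark against an RBD image (hypothetical pool/image names).
rbd bench --io-type write --io-size 4K --io-total 1G testpool/testimage
```

Both commands report throughput, IOPS, and latency figures that you can compare before and after a tuning change.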
Controlling scrubbing, deep scrubbing, backfill, and recovery processes helps avoid cluster over-utilization.
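As a sketch of such controls, the following commands disable scrubbing during a maintenance window and throttle backfill and recovery. The numeric values are illustrative examples, not recommendations; tune them for your cluster.

```shell
# Temporarily disable scrubbing and deep scrubbing cluster-wide.
ceph osd set noscrub
ceph osd set nodeep-scrub

# Throttle backfill and recovery so they compete less with client I/O
# (example values only).
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# Re-enable scrubbing when the maintenance window closes.
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```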
Troubleshooting Ceph issues starts with determining which Ceph component is causing the issue.
Enabling logging for a failing Ceph subsystem provides diagnostic information about the issue.
Raising a subsystem's debug level increases logging verbosity; levels range from 1 (terse) up to 20 (very verbose).
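For instance, the OSD subsystem's debug level can be raised at runtime and then reset. The level 10/10 is an example value, where the two numbers are the log-file level and the in-memory level.

```shell
# Raise the OSD subsystem debug level on all running OSD daemons.
ceph tell osd.* config set debug_osd 10/10

# Persist the level through the centralized configuration database.
ceph config set osd debug_osd 10/10

# Remove the override to return to the default when troubleshooting is done.
ceph config rm osd debug_osd
```

Remember to reset debug levels afterward, because verbose logging consumes disk space and can itself degrade performance.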