In this chapter, you learned:
- The following services provide the foundation for a Ceph storage cluster:
  - Monitors (MONs) maintain the cluster maps.
  - Object Storage Devices (OSDs) store and manage objects.
  - Managers (MGRs) track and expose cluster runtime metrics.
  - Metadata Servers (MDSes) store the metadata that CephFS uses to serve POSIX file operations to clients efficiently.
- RADOS (Reliable Autonomic Distributed Object Store), a self-healing and self-managing object store, is the storage back end for the Ceph cluster.
- RADOS provides four access methods: the librados native API, the object-based RADOS Gateway (RGW), the RADOS Block Device (RBD), and the distributed POSIX-compliant CephFS file system.
- A Placement Group (PG) aggregates a set of objects into a hash bucket. The CRUSH algorithm maps each hash bucket to a set of OSDs for storage.
- Pools are logical partitions of Ceph storage that hold object data. A pool acts as a name tag that groups objects, and it distributes those objects to storage through its placement groups.
- Red Hat Ceph Storage provides two interfaces for managing clusters: a command line and a Dashboard GUI. Both interfaces use the same cephadm module to perform operations and to interact with cluster services.
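The object-to-PG-to-OSD flow described above can be sketched in a few lines of Python. This is a simplified model for illustration only: real Ceph hashes object names with the rjenkins hash and a "stable mod", and CRUSH walks a weighted hierarchy of buckets, whereas this sketch substitutes CRC32 and rendezvous (highest-random-weight) hashing; the function names `object_to_pg` and `crush_map` are hypothetical, not Ceph APIs.

```python
import zlib


def object_to_pg(pool_id: int, object_name: str, pg_num: int) -> str:
    """Map an object name to a placement group (PG) id.

    Simplified stand-in: CRC32 plus plain modulo instead of Ceph's
    rjenkins hash and stable mod. PG ids are conventionally printed
    as <pool-id>.<hex-seed>.
    """
    pg_seed = zlib.crc32(object_name.encode()) % pg_num
    return f"{pool_id}.{pg_seed:x}"


def crush_map(pg_id: str, osds: list[int], replicas: int = 3) -> list[int]:
    """Stand-in for CRUSH: deterministically pick `replicas` distinct
    OSDs for a PG by ranking every OSD on a per-PG hash (rendezvous
    hashing, a much simpler cousin of CRUSH's straw2 buckets)."""
    ranked = sorted(osds, key=lambda osd: zlib.crc32(f"{pg_id}:{osd}".encode()))
    return ranked[:replicas]


# An object is first hashed into a PG of its pool, and the PG is then
# mapped to an ordered set of OSDs. The same inputs always yield the
# same placement, so any client can compute locations independently.
pg = object_to_pg(pool_id=1, object_name="rbd_data.1234", pg_num=32)
acting_set = crush_map(pg, osds=list(range(8)))
```

Because placement is pure computation over the cluster map, there is no central lookup table: every client and OSD derives the same acting set from the same maps, which is what lets RADOS scale and self-manage.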