Summary

In this chapter, you learned:

  • The following services provide the foundation for a Ceph storage cluster:

    • Monitors (MONs) maintain cluster maps.

    • Object Storage Devices (OSDs) store and manage objects.

    • Managers (MGRs) track and expose cluster runtime metrics.

    • Metadata Servers (MDSes) store the metadata that CephFS uses so that clients can run POSIX commands efficiently.

  • RADOS (Reliable Autonomic Distributed Object Store) is a self-healing, self-managing object store that serves as the storage back end for the Ceph cluster.

  • Clients access RADOS storage through four methods: the librados native API, the object-based RADOS Gateway (RGW), the RADOS Block Device (RBD), and the CephFS distributed file system.

  • A Placement Group (PG) aggregates a set of objects into a hash bucket. The CRUSH algorithm maps the hash buckets to a set of OSDs for storage.

  • Pools are logical partitions of Ceph storage used to store object data. Each pool acts as a name tag that groups objects, and the objects in a pool are distributed across the pool's placement groups for storage.

  • Red Hat Ceph Storage provides two interfaces, a command line and a Dashboard GUI, for managing clusters. Both interfaces use the same cephadm module to perform operations and to interact with cluster services.
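The object-to-PG-to-OSD mapping summarized above can be sketched in a few lines of Python. This is a simplified illustration, not Ceph's actual implementation: real Ceph hashes object names with the rjenkins1 hash and computes the OSD set with the CRUSH algorithm from the cluster map. Here, MD5 and a deterministic seeded sample stand in for both, and the pool size, PG count, and OSD ids are made-up example values.

```python
import hashlib
import random

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Hash an object name into one of pg_num placement groups.
    Real Ceph uses the rjenkins1 hash; MD5 is an illustrative stand-in."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg_id: int, osds: list, replicas: int = 3) -> list:
    """Map a placement group to a set of OSDs.
    Real Ceph computes this with CRUSH and the cluster map; a
    deterministic per-PG sample is an illustrative stand-in."""
    rng = random.Random(pg_id)        # seeded so the mapping is stable
    return rng.sample(osds, replicas)

# A pool groups objects: each object in the pool is hashed into one of
# the pool's placement groups, and each PG maps to a fixed OSD set.
pool_pg_num = 32                      # example PG count for the pool
cluster_osds = list(range(8))         # example OSD ids 0..7

pg = object_to_pg("my-object", pool_pg_num)
acting_set = pg_to_osds(pg, cluster_osds)
print(f"object 'my-object' -> PG {pg} -> OSDs {acting_set}")
```

Because both steps are deterministic, any client that knows the PG count and the OSD layout computes the same placement for the same object name, which is why Ceph clients can locate data without consulting a central lookup table.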

Revision: cl260-5.0-29d2128