Summary

In this chapter, you learned:

  • BlueStore is the default storage back end for Red Hat Ceph Storage 5. It stores objects directly on raw block devices and improves performance over the previous FileStore back end.

  • BlueStore OSDs manage metadata with an embedded RocksDB key-value database, which is stored on a BlueFS partition. Red Hat Ceph Storage 5 enables RocksDB sharding by default for new OSDs.

  • The block.db device stores object metadata, and the write-ahead log (WAL) device stores journals. You can improve OSD performance by placing the block.db and WAL devices on faster storage than the object data.

  • You can provision OSDs by using service specification files, by choosing a specific host and device, or automatically with the orchestrator service.
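As a sketch of the first approach, a service specification file for the orchestrator might look like the following. The service_id, host pattern, and device filters here are hypothetical examples, not values from the course:

```yaml
service_type: osd
service_id: default_osd_group   # hypothetical name
placement:
  host_pattern: '*'             # target every managed host
spec:
  data_devices:
    rotational: 1               # HDDs hold object data
  db_devices:
    rotational: 0               # SSDs hold block.db metadata
```

You would apply such a file with `ceph orch apply -i <file>`; the orchestrator then creates OSDs on every device that matches the filters.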

  • Pools are logical partitions for storing objects. The available pool types are replicated and erasure coded.

  • Replicated pools are the default pool type; they copy each object to multiple OSDs.

  • Erasure coded pools divide object data into data chunks (k), calculate coding chunks (m) from the data chunks, and store each chunk on a separate OSD. If an OSD fails, the surviving data and coding chunks are used to reconstruct the lost chunk.

  • A pool namespace allows you to logically partition a pool and is useful for restricting storage access by an application.

  • The cephx protocol authenticates clients and authorizes communication between clients, applications, and daemons in the cluster. It is based on shared secret keys.

  • Clients can access the cluster when they are configured with a user account name and a keyring file containing the user's secret key.

  • Cephx capabilities provide a way to control access to pools and object data within pools.
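For example, a keyring file that grants an application read access to the monitors and read/write access to a single pool could look like the following. The user name, pool name, and key shown here are hypothetical placeholders:

```ini
[client.app1]
    key = AQ...                      # placeholder secret key
    caps mon = "allow r"             # read-only access to the monitors
    caps osd = "allow rw pool=mypool"  # read/write, restricted to one pool
```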

Revision: cl260-5.0-29d2128