Manage Layered Storage

Objectives

Analyze the multiple storage components that make up the layers of the storage stack.

Storage Stack

Storage in RHEL is composed of multiple layers of drivers, managers, and utilities that are mature, stable, and full of modern features. Managing storage requires familiarity with stack components, and recognizing that storage configuration affects the boot process, application performance, and the ability to provide needed storage features for specific application use cases.

Figure 8.2: Storage stack

Previous sections in the Red Hat System Administration courses presented XFS file systems, network storage sharing, partitioning, and the Logical Volume Manager. This section shows the bottom-to-top RHEL storage stack and introduces each layer.

This section also covers Stratis, the daemon that unifies, configures, and monitors the underlying RHEL storage stack components, and provides automated local storage management from either the CLI or the RHEL web console.

Block Device

Block devices are at the bottom of the storage stack, and present a stable, consistent device protocol that enables including almost any block device transparently in a RHEL storage configuration. Most block devices today are accessed through the RHEL SCSI device driver, and appear as a SCSI device, including earlier ATA hard drives, solid-state devices, and common enterprise host bus adapters (HBAs). RHEL also supports iSCSI, Fibre Channel over Ethernet (FCoE), virtual machine driver (virtio), serial-attached SCSI (SAS), Non-Volatile Memory Express (NVMe), and other block devices.

An iSCSI target can be a dedicated physical device in a network or an iSCSI software-configured logical device on a networked storage server. The target is the endpoint of the SCSI protocol communication, and presents its storage as Logical Unit Numbers (LUNs).
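For example, with the iSCSI initiator utilities installed, you can discover and log in to an iSCSI target by using the iscsiadm command. The portal address and target IQN in this sketch are hypothetical placeholders for your environment.

[root@host ~]# iscsiadm -m discovery -t sendtargets -p 192.0.2.10
[root@host ~]# iscsiadm -m node -T iqn.2024-01.com.example:storage1 -p 192.0.2.10 --login

After a successful login, the LUNs from the target appear on the initiator as local SCSI block devices, such as /dev/sdb.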

The Fibre Channel over Ethernet (FCoE) protocol transmits Fibre Channel frames over Ethernet networks. Typically, each data center has dedicated LAN and Storage Area Network (SAN) cabling, which is uniquely configured for its traffic. With FCoE, both traffic types can be combined into a larger, converged, Ethernet network architecture. FCoE benefits include lower hardware and energy costs.

Multipath

A path is a connection between a server and the underlying storage. Device Mapper multipath (dm-multipath) is a RHEL native multipath tool for configuring redundant I/O paths into a single, path-aggregated logical device. A logical device that is created by using the device mapper (dm) appears as a unique block device in the /dev/mapper/ directory for each LUN that is attached to the system.
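As an illustrative sketch, you can enable dm-multipath with the mpathconf command from the device-mapper-multipath package, and then list the aggregated paths:

[root@host ~]# dnf install device-mapper-multipath
[root@host ~]# mpathconf --enable --with_multipathd y
[root@host ~]# multipath -ll

The multipath -ll output shows each multipath device, its path groups, and the state of every underlying path.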

You can also implement storage multipath redundancy by using network bonding when the storage, such as iSCSI and FCoE, uses network cabling.

Partitions

A block device can be further divided into partitions. A single partition might consume the entire block device, or the block device might be divided into multiple partitions. You can use these partitions to create a file system or LVM devices, or use them directly for database structures or other raw storage.
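For example, you can label a block device and create a partition on it with the parted command. The device name and sizes in this sketch are placeholders:

[root@host ~]# parted /dev/vdb mklabel gpt
[root@host ~]# parted /dev/vdb mkpart data xfs 1MiB 5GiB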

RAID

A Redundant Array of Inexpensive Disks (RAID) is a storage virtualization technology that creates large logical volumes from multiple physical or virtual block device components. Different forms of RAID volumes offer data redundancy, performance improvement, or both, by implementing mirroring or striping layouts.

LVM supports RAID levels 0, 1, 4, 5, 6, and 10. RAID logical volumes that LVM creates and manages use the Multiple Devices (MD) kernel drivers, which the mdadm utility also uses. When not using LVM, Device Mapper RAID (dm-raid) provides a device mapper interface to the MD kernel drivers.
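For example, the following hypothetical command creates a RAID 1 logical volume with one mirror from the vg01 volume group (the volume group, size, and name are placeholders):

[root@host ~]# lvcreate --type raid1 --mirrors 1 --size 1G --name lvraid1 vg01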

Logical Volume Manager

LVM physical volumes, volume groups, and logical volumes were discussed in a previous section. LVM can take almost any form of physical or virtual block device, build new logical storage volumes from that storage, and effectively hide the physical storage configuration from applications and other storage clients.

You can stack LVM volumes and implement advanced features, such as encryption and compression, at each layer of the stack. Stacked LVM volumes have mandated rules and recommended practices for practical layering in specific scenarios. See the use-case-specific recommendations in the Configuring and Managing Logical Volumes guide.

LVM supports LUKS encryption, where a lower block device or partition is encrypted and presented as a secure volume to create a file system on top. The practical advantage for LUKS over file-system or file-based encryption is that a LUKS-encrypted device does not allow public visibility or access to the file-system structure. The LUKS-encrypted device ensures that a physical device remains secure even when removed from a computer.
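A minimal sketch of this layering, with placeholder device and volume names, formats a partition with LUKS, opens it as a device-mapper volume, and creates a file system on top:

[root@host ~]# cryptsetup luksFormat /dev/vdb1
[root@host ~]# cryptsetup open /dev/vdb1 securevol
[root@host ~]# mkfs.xfs /dev/mapper/securevol

The decrypted volume is available at /dev/mapper/securevol only while it is open; the data on /dev/vdb1 itself remains encrypted.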

LVM now incorporates VDO deduplication and compression as a configurable feature of regular logical volumes. You can use LUKS encryption and VDO together with logical volumes, where the LVM LUKS encryption is enabled underneath the LVM VDO volume.
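For example, the following hypothetical command creates a VDO logical volume in the vg01 volume group, with 50 GiB of physical space presented as a 500 GiB virtual size (the volume group, sizes, and name are placeholders):

[root@host ~]# lvcreate --type vdo --size 50G --virtualsize 500G --name vdolv vg01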

File System or Other Use

The top layer of the stack is typically a file system, although the storage can also be used as raw space for databases or custom application data requirements. RHEL supports multiple file-system types, and recommends XFS for most modern use cases. XFS is required when the tool that manages the storage is Red Hat Ceph Storage or the Stratis storage tool.

Database server applications consume storage in different ways, depending on their architecture and size. Some smaller databases store their structures in regular files that are contained in a file system. Because of the additional overhead or restrictions of file system access, this architecture has scaling limits. Larger databases that bypass file system caching, and that use their own caching mechanisms, create their database structures on raw storage. Logical volumes are suitable for database and other raw storage use cases.

Red Hat Ceph Storage creates its own storage management metadata structures on raw devices, to create Ceph Object Storage Devices (OSDs). In the latest Red Hat Ceph Storage versions, Ceph uses LVM to initialize disk devices for use as OSDs. More information is available in the Cloud Storage with Red Hat Ceph Storage (CL260) course.

Stratis Storage Management

Stratis is a local storage management tool that Red Hat and the upstream Fedora community developed. With Stratis, you can configure initial storage, change a storage configuration, and use advanced storage features.

Important

Stratis is currently available as a Technology Preview, but is expected to be supported in a later RHEL 9 version. For information about the Red Hat scope of support for Technology Preview features, see the Technology Preview Features Support Scope document.

Red Hat encourages customers to provide feedback when deploying Stratis.

Stratis runs as a service that manages pools of physical storage devices, and transparently creates and manages volumes for the newly created file systems.

Stratis builds file systems from shared pools of disk devices by using the thin provisioning concept. Instead of immediately allocating physical storage space to the file system when you create it, Stratis dynamically allocates that space from the pool as the file system stores more data. Therefore, the file system might appear to be 1 TiB, but might have only 100 GiB of real storage that is allocated to it from the pool.

You can create multiple pools from different storage devices. From each pool, you can create one or more file systems. Currently, you can create up to 224 file systems per pool.

Stratis builds the components that make up a Stratis-managed file system from standard Linux components. Internally, Stratis uses the Device Mapper infrastructure that LVM also uses. Stratis formats the managed file systems with XFS.

Figure 8.3: Stratis architecture illustrates how Stratis assembles the elements of its storage management solution. Stratis assigns block storage devices such as hard disks or SSDs to pools. Each device contributes some physical storage to the pool. Then, Stratis creates file systems from the pools, and maps physical storage to each file system as needed.

Figure 8.3: Stratis architecture

Stratis Administration Methods

To manage file systems with the Stratis storage management solution, install the stratis-cli and stratisd packages. The stratis-cli package provides the stratis command, which sends reconfiguration requests to the stratisd system daemon. The stratisd package provides the stratisd service, which handles reconfiguration requests, and manages and monitors Stratis block devices, pools, and file systems.

Stratis administration is included in the RHEL web console.

Warning

Reconfigure file systems created by Stratis only with Stratis tools and commands.

Stratis uses stored metadata to recognize managed pools, volumes, and file systems. Manually configuring Stratis file systems with non-Stratis commands can result in overwriting that metadata, and can prevent Stratis from recognizing the file system volumes that it previously created.

Install and Enable Stratis

To use Stratis, ensure that your system has the required software and that the stratisd service is running. Install the stratis-cli and stratisd packages, and start and enable the stratisd service.

[root@host ~]# dnf install stratis-cli stratisd
...output omitted...
Is this ok [y/N]: y
...output omitted...
Complete!
[root@host ~]# systemctl enable --now stratisd

Create Stratis Pools

Create pools of one or more block devices by using the stratis pool create command. Then, use the stratis pool list command to view the list of available pools.

[root@host ~]# stratis pool create pool1 /dev/vdb
[root@host ~]# stratis pool list
Name                  Total Physical   Properties            UUID
pool1   5 GiB / 37.63 MiB / 4.96 GiB      ~Ca,~Cr   11f6f3c5-5...

Warning

The stratis pool list command displays the storage space in use and the available pool space. Currently, if a pool becomes full, then further data that is written to the pool's file systems is quietly discarded.

Use the stratis pool add-data command to add block devices to a pool. Then, use the stratis blockdev list command to verify the block devices of a pool.

[root@host ~]# stratis pool add-data pool1 /dev/vdc
[root@host ~]# stratis blockdev list pool1
Pool Name   Device Node   Physical Size   Tier
pool1       /dev/vdb              5 GiB   Data
pool1       /dev/vdc              5 GiB   Data

Manage Stratis File Systems

Use the stratis filesystem create command to create a file system from a pool. The links to the Stratis file systems are in the /dev/stratis/pool1 directory. Use the stratis filesystem list command to view the list of available file systems.

[root@host ~]# stratis filesystem create pool1 fs1
[root@host ~]# stratis filesystem list
Pool Name   Name   Used      Created             Device                   UUID
pool1       fs1    546 MiB   Apr 08 2022 04:05   /dev/stratis/pool1/fs1   c7b5719...

Create a Stratis file system snapshot by using the stratis filesystem snapshot command. Snapshots are independent of the source file systems. Stratis dynamically allocates the snapshot storage space, and uses an initial 560 MB to store the file system's journal.

[root@host ~]# stratis filesystem snapshot pool1 fs1 snapshot1
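Because a snapshot behaves as a regular Stratis file system, you can mount it to access its contents. The mount point in this sketch is a placeholder:

[root@host ~]# mkdir /snapmnt
[root@host ~]# mount /dev/stratis/pool1/snapshot1 /snapmnt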

Persistently Mount Stratis File Systems

You can persistently mount Stratis file systems by editing the /etc/fstab file and specifying the details of the file system. Use the lsblk command to display the file system's UUID, and use that UUID in the /etc/fstab file to identify the file system. You can also use the stratis filesystem list command to obtain the UUID of the file system.

[root@host ~]# lsblk --output=UUID /dev/stratis/pool1/fs1
UUID
c7b57190-8fba-463e-8ec8-29c80703d45e

The following example shows an entry in the /etc/fstab file to mount a Stratis file system persistently. This example entry is a single long line in the file. The x-systemd.requires=stratisd.service mount option delays mounting the file system until the systemd daemon starts the stratisd service during the boot process.

UUID=c7b57190-8fba-463e-8ec8-29c80703d45e /dir1 xfs defaults,x-systemd.requires=stratisd.service 0 0
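After you add the entry, you can verify it without rebooting. The /dir1 mount point is the placeholder directory from the example entry:

[root@host ~]# mkdir /dir1
[root@host ~]# systemctl daemon-reload
[root@host ~]# mount /dir1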

Important

If you do not include the x-systemd.requires=stratisd.service mount option in the /etc/fstab file for each Stratis file system, then the machine fails to start correctly, and aborts to emergency.target the next time that you reboot it.

Warning

Do not use the df command to query Stratis file system space.

The df command reports that any mounted Stratis-managed XFS file system is 1 TiB, regardless of the current allocation. Because the file system is thinly provisioned, a pool might not have enough physical storage to back the entire file system. Other file systems in the pool might use up all the available storage.

Therefore, it is possible to consume the whole storage pool, even if the df command reports that the file system has available space. Writes to a file system with no available pool storage can fail.

Instead, always use the stratis pool list command to monitor a pool's available storage accurately.

Revision: rh134-9.0-fa57cbe