Chapter 5. Configuring Virtual Machine System Disks

Abstract

Goal Identify the available choices for configuring, storing and selecting block-based virtual machine system disks, including the choice of ephemeral or persistent disks for specific use cases.
Objectives
  • Describe the purpose, use cases and storage choices when selecting ephemeral disks for instances.

  • Describe the purpose, use cases and storage choices when selecting persistent volumes for instances.

  • Manage block-based storage elements and activities for common application data use cases.

Sections
  • Configuring Ephemeral Disks (and Guided Exercise)

  • Configuring Persistent Disks (and Guided Exercise)

  • Managing Volumes and Snapshots (and Guided Exercise)

Lab

Configuring Virtual Machine System Disks

Configuring Ephemeral Disks

Objectives

After completing this section, you should be able to describe the purpose, use cases and storage choices when selecting ephemeral disks for instances.

Storage in Red Hat OpenStack Platform

Applications in a cloud environment should take advantage of cloud benefits, such as the scalability of compute and storage resources in Red Hat OpenStack Platform (RHOSP). By default, RHOSP uses Ceph as the back end for the Block Storage service, but it also supports integration with existing enterprise-level storage systems such as Storage Area Network (SAN), Network Attached Storage (NAS), and Direct Attached Storage (DAS).

In a physical enterprise environment, servers are typically installed with direct attached storage drives, and use external storage for scaling and resource sharing. In cloud-based instances, virtual disks can be directly attached, and external shared storage is provided as a way to scale the local storage. In a self-service cloud environment, storage is a key resource to be managed so that the maximum number of users can take advantage of it.

Without the Block Storage service, all instance disks are ephemeral, meaning that any storage resources are discarded when the instance is terminated. Ephemeral storage includes block disk devices and swap devices used in a deployed instance.

As a Domain Operator, you should understand the features of persistent and ephemeral storage so that you can advise your OpenStack users.

To scale an instance's storage, provision additional virtual disks using the Block Storage service, Object Storage service, or the Shared File Systems service. Storage resources provided by these services may be persistent; they remain after the instance is terminated. RHOSP supports different storage resources providing persistent storage, including volumes, object containers, and shares.

Managing Volumes

Volumes are the common way to provide persistent storage to instances, and are managed by the Block Storage service. Like physical machines, volumes are presented as raw devices to the instance's operating system, and can be formatted and mounted for use. A volume in OpenStack can be implemented as different volume types, specified by the backing storage infrastructure or device. A volume can be attached to more than one instance at a time, and can also be moved between instances.
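As a sketch of this workflow, volumes can be created, attached, and moved between instances with the OpenStack CLI. These commands assume a deployed RHOSP environment; the volume and server names are placeholders.

```shell
# Create a 10 GB volume (names are placeholders).
openstack volume create --size 10 demo-volume

# Attach the volume to a running instance; it appears as a raw
# device (for example, /dev/vdb) inside the guest.
openstack server add volume demo-server demo-volume

# Detach the volume and attach it to a different instance
# to move the data it contains.
openstack server remove volume demo-server demo-volume
openstack server add volume other-server demo-volume
```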

Legacy servers may require that the system (root) disk be persistent. Default ephemeral root disks cannot satisfy this requirement, but a root disk can instead be created from an existing prebuilt, bootable volume. This is possible because RHOSP supports the creation of bootable volumes based on images managed by the Image service.
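A minimal sketch of this approach, assuming a deployed cloud with an image named rhel8 and a network named demo-net (both placeholders):

```shell
# Create a bootable volume from an existing image managed by
# the Image service.
openstack volume create --image rhel8 --size 10 rhel8-boot-volume

# Launch an instance whose root disk is that persistent volume;
# the disk survives instance termination.
openstack server create --flavor default --network demo-net \
    --volume rhel8-boot-volume persistent-server
```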

Object Containers

Red Hat OpenStack Platform also includes an Object Storage service, which allows storing files as objects. These objects are collected in containers, on which certain access permissions can be configured. This persistent storage is accessible using an API, so it is well suited for cloud users to upload their data to instances.
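For illustration, objects can be managed with the CLI as well as the API. The container and file names below are placeholders, and the commands assume a deployed cloud.

```shell
# Create a container, then upload a file to it as an object.
openstack container create demo-container
openstack object create demo-container report.txt

# Download the object later, from any system with API access.
openstack object save demo-container report.txt
```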

Manila Shares

In previous versions of OpenStack, a distributed file system had to be created on top of several volumes to share data among several instances at the same time. The Shared File Systems service (Manila) supports the provisioning of shares that can be mounted on several instances at the same time.
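A brief sketch using the Manila client, assuming a deployed environment with an NFS share type configured; the share name is a placeholder.

```shell
# Create a 1 GB NFS share.
manila create NFS 1 --name demo-share

# List the export locations to obtain the path to mount
# inside each instance that needs shared access.
manila share-export-location-list demo-share
```

Each instance then mounts the exported path with a standard NFS mount, giving several instances simultaneous access to the same file system.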

Describing Block Storage

Block storage uses volumes as its storage unit; a volume must be attached to an instance in order to be accessed. Object storage uses object containers, composed of files and folders, as its storage unit. All objects can be accessed using an API, so object storage does not require an instance in order to be accessed, although objects can also be accessed from inside instances.

The use of block storage in OpenStack depends on the back-end storage infrastructure. Depending on back-end storage performance, the block storage service can be suitable for high throughput use cases. Currently, the OpenStack Block Storage service supports both Red Hat Ceph Storage and NFS as back ends, and provides drivers allowing native interaction with many common SAN vendors. Volumes are directly served from those infrastructures.

Generically, block storage is well suited to the following use cases:

  • Extra space to store data that might need to be persistent or ephemeral.

  • A distributed file system based on raw devices distributed across different instances.

  • Back-end storage for critical cloud-based applications such as distributed databases.

Recommended Practices for Block Storage

In general, Red Hat recommends the following practices for block storage in OpenStack:

  • Avoid using LVM as the primary storage in production environments. Red Hat does not support LVM as a primary block storage back end.

  • Use LVM to manage instance virtual disks on compute nodes.

  • Configure a suitable storage back end based on workload requirements.

  • Configure multiple back ends to use your legacy storage as storage tiers.

  • Configure the storage scheduler to allocate volumes on back ends based on volume requirements.
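A multiple back-end setup is expressed in the Block Storage service configuration. The following fragment is illustrative only; the section names, volume_backend_name values, and the choice of an NFS legacy tier are assumptions, not a definitive configuration.

```ini
[DEFAULT]
enabled_backends = ceph,legacy-nfs

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph

[legacy-nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = legacy-nfs
```

A volume type associated with each volume_backend_name then lets the storage scheduler place new volumes on the appropriate tier based on the volume type requested.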

Back-end Files for Ephemeral Storage

Ephemeral storage resources for an instance are defined by the flavor used to create the instance. OpenStack flavors currently support the definition of three resources, providing non-persistent storage inside of an instance. Those resources are a root disk, an ephemeral disk, and a swap disk. Each of these resources is mapped as a device to the instance, which the cloud-init process configures during the boot process according to flavor specifications. To properly configure those resources, instances must have access to the metadata service provided by the Compute service.

The back-end Ceph RBD images for the different ephemeral storage resources are created on Ceph when instances are deployed. These RBD images use the instance ID as a prefix to their name. The following RBD images are created when an instance is deployed and its associated flavor has a root disk, an ephemeral disk, and swap memory defined.

Table 5.1. Back-end Files for Ephemeral Storage Resources

File name                                        Resource        Description
9d5164a5-e409-4409-b3a0-779e0b90dec9_disk        Root disk       Operating system
9d5164a5-e409-4409-b3a0-779e0b90dec9_disk.eph0   Ephemeral disk  Additional space
9d5164a5-e409-4409-b3a0-779e0b90dec9_disk.swap   Swap disk       Swap memory
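These back-end images can be inspected directly on the Ceph cluster. The pool name follows the convention described later in this section, and the instance ID is the example value from the table; both would differ in your environment.

```shell
# List the RBD images backing a deployed instance's ephemeral
# storage, filtering on the instance ID.
rbd --pool vms ls | grep 9d5164a5-e409-4409-b3a0-779e0b90dec9
```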

Ceph has several pools to support various OpenStack services and their functions.

images

The images pool provides storage for the Image service, storing bootable operating system images. When an instance is launched, the image it requires is copied to the appropriate compute node and cached as a base image. An overlay image is created for the instance to write to, ensuring that no changes are made to the base image. Additional instances that use the same image will launch more quickly because the base image is already present.

volumes

The volumes pool supports the Block Storage service, storing persistent and ephemeral volumes as they are created.

vms

The vms pool supports the Compute service, allowing instance disks to be stored in Ceph instead of in the compute node's local storage. Storing the disk image centrally allows for recovery in the event of a compute node failure, and faster evacuation when performing compute node maintenance.

When an instance is terminated, the back-end RBD images for associated ephemeral storage resources are deleted. This behavior, which is common in cloud computing environments, contrasts markedly with physical servers, where associated local storage is persistent. This supports the cloud computing concept of self-service access to hardware resources, so that unused hardware resources are freed up when they are no longer needed. Instances are designed for use as on-demand processing, with ephemeral storage as a dynamic workspace to facilitate immediate processing. When the processing has finished, the instances and their workspace are no longer needed and the ephemeral storage resources are removed.

Ephemeral storage resources for an instance are defined in the flavor used to create that instance. The size of the root disk, ephemeral disk, and swap disk are defined by a flavor. Although defining a root disk size is mandatory, the ephemeral disk and swap disk are optional. If either disk is defined with a size greater than zero, that disk is created during the instance deployment. Using unnecessarily large ephemeral or swap disks affects the availability of resources on the compute node where an instance is deployed and the optimal usage of cloud resources.
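As a sketch, a flavor defining all three ephemeral storage resources can be created as follows; the flavor name and sizes are example values, and the command assumes administrative access to a deployed cloud.

```shell
# Root disk and ephemeral disk sizes are in GB; swap is in MB.
openstack flavor create --vcpus 2 --ram 2048 \
    --disk 10 --ephemeral 2 --swap 1024 demo-flavor
```

Because the ephemeral and swap disks are optional, omitting --ephemeral and --swap (or setting them to zero) creates a flavor with only a root disk.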

Root Disk Management

When the instance is deployed, the root disk is typically created as a copy-on-write clone of the RBD image containing the image used by the instance. When the Compute service has a default (non-Ceph) configuration, the original image is managed by the Image service, and a copy is stored as a libvirt base image on the compute node where the instance is deployed. The root disk is mounted as the first available device in the instance, typically /dev/vda or /dev/sda in Red Hat Enterprise Linux based instances.

In RHOSP, the image is stored in the Ceph images pool. When an instance is launched, an overlay file is created in the vms pool. When the instance is deleted, only the overlay file is deleted.

Ephemeral Disk Management

The ephemeral disk is mapped to the instance as a raw device. Commonly, it is mapped as the second available device, as either /dev/vdb or /dev/sdb in RHEL-based instances. The cloud-init process configures this device with a file system and mounts it on the /mnt directory in the instance. The choice of file-system type and mount point used by cloud-init is configurable.
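One way to configure this behavior is through a cloud-config user-data file passed at launch time. This sketch assumes a flavor, image, and network exist with the placeholder names shown, and that a different mount point than the default /mnt is wanted.

```shell
# Write a cloud-config file that mounts the ephemeral disk on
# /var/scratch instead of the default /mnt.
cat > user-data.yaml << 'EOF'
#cloud-config
mounts:
  - [ ephemeral0, /var/scratch, auto, "defaults,nofail" ]
EOF

# Launch the instance with the custom user data.
openstack server create --flavor demo-flavor --image rhel8 \
    --user-data user-data.yaml --network demo-net demo-server
```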

Swap Disk Management

The swap disk is also mapped to the instance as a raw device. It is mapped to the instance as the next device available, either as /dev/vdc or /dev/sdc in RHEL-based instances when both a root disk and an ephemeral disk are also configured. The cloud-init process configures this device as swap and enables it as swap memory in the instance.

Note

If the metadata service is not available from the instance, cloud-init cannot prepare the ephemeral and swap disks, but the disks can still be formatted, mounted, and configured manually, as needed.
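A minimal sketch of that manual preparation, run as root inside the instance; /dev/vdb and /dev/vdc are the typical device names described above, but may differ in your environment.

```shell
# Format and mount the ephemeral disk.
mkfs.xfs /dev/vdb
mount /dev/vdb /mnt

# Initialize the swap disk and enable it as swap memory.
mkswap /dev/vdc
swapon /dev/vdc
```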

Reviewing an Ephemeral Storage Use Case

Cassandra is an eventually consistent database, ideal for distributed architectures. Because Cassandra tolerates node loss, each application instance could also be a database node using ephemeral disks. When launched, the instance would join the Cassandra cluster and replicate data from the existing nodes. This design removes the need for centralized persistent storage, and can take advantage of fast solid-state drives on the compute nodes.

The benefits of this design might include:

  • No dependency on the performance of, or network bandwidth to, a centralized database.

  • In the event of a node failure, only the current transaction would be lost.

  • Using ephemeral storage means that there is no requirement to back up the instance, or to attempt recovery if it fails. A monitoring system could delete the failed instance and launch a replacement without human intervention.

Figure 5.1: Cassandra on ephemeral storage


References

Further information is available in the Storage Guide for Red Hat OpenStack Platform at https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/storage_guide/index

Revision: cl110-16.1-4c76154