Partition storage devices, configure LVM, format partitions or logical volumes, mount file systems, and add swap spaces.
Use the ansible.posix.mount module to mount an existing file system.
The most common parameters for the ansible.posix.mount module are as follows:
path: The directory where the file system is mounted.
src: The device to mount. This can be a device name, a UUID, or an NFS volume.
fstype: The file system type.
state: Accepts the absent, mounted, present, unmounted, or remounted values.
The following example task mounts the NFS share available at 172.25.250.100:/share on the /nfsshare directory on the managed hosts.
- name: Mount NFS share
  ansible.posix.mount:
    path: /nfsshare
    src: 172.25.250.100:/share
    fstype: nfs
    opts: defaults
    dump: '0'
    passno: '0'
    state: mounted
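You can also use the state parameter to remove a mount. The following task is a minimal sketch that unmounts the NFS share from the previous example and removes its entry from the /etc/fstab file by setting state to absent:

- name: Ensure the NFS share is unmounted and removed from /etc/fstab
  ansible.posix.mount:
    path: /nfsshare
    state: absent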
Red Hat Ansible Automation Platform provides the redhat.rhel_system_roles.storage system role to configure local storage devices on your managed hosts.
It can manage file systems on unpartitioned block devices, and format and create logical volumes on LVM physical volumes based on unpartitioned block devices.
The redhat.rhel_system_roles.storage role formally supports managing file systems and mount entries for two use cases:
Unpartitioned devices (whole-device file systems)
LVM on unpartitioned whole-device physical volumes
If you have other use cases, then you might need to use other modules and roles to implement them.
To create a file system on an unpartitioned block device with the redhat.rhel_system_roles.storage role, define the storage_volumes variable.
The storage_volumes variable contains a list of storage devices to manage.
The following dictionary items are available in the storage_volumes variable:
Table 9.6. Parameters for the storage_volumes Variable

| Parameter | Comments |
|---|---|
| name | The name of the volume. |
| type | This value must be disk. |
| disks | Must be a list of exactly one item; the unpartitioned block device. |
| mount_point | The directory on which the file system is mounted. |
| fs_type | The file system type to use (xfs, ext4, or swap). |
| mount_options | Custom mount options, such as ro or rw. |
The following example play creates an XFS file system on the /dev/vdg device, and mounts it on /opt/extra.
- name: Example of a simple storage device
  hosts: all
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_volumes:
        - name: extra
          type: disk
          disks:
            - /dev/vdg
          fs_type: xfs
          mount_point: /opt/extra

To create an LVM volume group with the redhat.rhel_system_roles.storage role, define the storage_pools variable.
The storage_pools variable contains a list of pools (LVM volume groups) to manage.
The dictionary items inside the storage_pools variable are used as follows:
The name variable is the name of the volume group.
The type variable must have the value lvm.
The disks variable is the list of block devices that the volume group uses for its storage.
The volumes variable is the list of logical volumes in the volume group.
The following entry creates the volume group vg01 with the type key set to the value lvm.
The volume group's physical volume is the /dev/vdb disk.
---
- name: Configure storage on webservers
  hosts: webservers
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: vg01
          type: lvm
          disks:
            - /dev/vdb

The disks option only supports unpartitioned block devices for your LVM physical volumes.
To create logical volumes, populate the volumes variable, nested under the storage_pools variable, with a list of logical volume names and their parameters.
Each item in the list is a dictionary that represents a single logical volume within the storage_pools variable.
Each logical volume list item has the following dictionary variables:
name: The name of the logical volume.
size: The size of the logical volume.
mount_point: The directory used as the mount point for the logical volume's file system.
fs_type: The logical volume's file system type.
state: Whether the logical volume should exist using the present or absent values.
The following example creates two logical volumes, named lvol01 and lvol02.
The lvol01 logical volume is 128 MB in size, formatted with the xfs file system, and is mounted at /data.
The lvol02 logical volume is 256 MB in size, formatted with the xfs file system, and is mounted at /backup.
---
- name: Configure storage on webservers
  hosts: webservers
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: vg01
          type: lvm
          disks:
            - /dev/vdb
          volumes:
            - name: lvol01
              size: 128m
              mount_point: "/data"
              fs_type: xfs
              state: present
            - name: lvol02
              size: 256m
              mount_point: "/backup"
              fs_type: xfs
              state: present

In the following example entry, if the lvol01 logical volume is already created with a size of 128 MB, then the logical volume and file system are enlarged to 256 MB, assuming that the space is available within the volume group.
          volumes:
            - name: lvol01
              size: 256m
              mount_point: "/data"
              fs_type: xfs
              state: present

You can use the redhat.rhel_system_roles.storage role to create logical volumes that are formatted as swap spaces.
The role creates the logical volume, formats it with the swap file system type, adds the swap volume to the /etc/fstab file, and enables the swap volume immediately.
The following playbook example creates the lvswap logical volume in the vgswap volume group, adds the swap volume to the /etc/fstab file, and enables the swap space.
---
- name: Configure a swap volume
  hosts: all
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: vgswap
          type: lvm
          disks:
            - /dev/vdb
          volumes:
            - name: lvswap
              size: 512m
              fs_type: swap
              state: present

You can manage partitions and file systems on your storage devices without using the system role. However, the most convenient modules for doing this are currently unsupported by Red Hat, which can make this more complicated.
Be careful when you use Ansible to partition and format file systems, especially if you use ansible.builtin.command tasks.
Mistakes in your code on important systems can result in lost data.
If you want to partition your storage devices without using the system role, your options are a bit more complex.
The unsupported community.general.parted module in the community.general Ansible Content Collection can perform this task.
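For example, a minimal sketch of a task using this module might look like the following; the /dev/sda device name and partition boundaries are assumptions chosen to mirror the ansible.builtin.command example later in this section:

- name: Ensure that /dev/sda contains one GPT partition
  community.general.parted:
    device: /dev/sda      # illustrative device name
    label: gpt            # disk label to ensure on the device
    number: 1             # manage the first partition
    part_start: 1MiB
    part_end: 100%
    state: present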
You can use the ansible.builtin.command module to run the partitioning commands on the managed hosts.
However, you need to take special care to make sure the commands are idempotent and do not inadvertently destroy data on your existing storage devices.
For example, the following task creates a GPT disk label and a /dev/sda1 partition on the /dev/sda storage device only if /dev/sda1 does not already exist:
- name: Ensure that /dev/sda1 exists
  ansible.builtin.command:
    cmd: parted --script /dev/sda mklabel gpt mkpart primary 1MiB 100%
    creates: /dev/sda1

This depends on the fact that if the /dev/sda1 partition exists, then a Linux system creates a /dev/sda1 device file for it automatically.
The easiest way to manage file systems without using the system role might be the community.general.filesystem module.
However, Red Hat does not support this module, so you use it at your own risk.
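As a hedged sketch (the /dev/vdh device name is an assumption for the example), a task using this module to format a whole device might look like the following; by default the module does not overwrite an existing file system on the device:

- name: Ensure an XFS file system exists on /dev/vdh
  community.general.filesystem:
    dev: /dev/vdh      # illustrative unpartitioned block device
    fstype: xfs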
As an alternative, you can use the ansible.builtin.command module to run commands to format file systems.
However, you should use some mechanism to make sure that the device you are formatting does not already contain a file system, to ensure idempotency of your play, and to avoid accidental data loss.
One way to do that might be to review storage-related facts gathered by Ansible to determine if a device appears to be formatted with a file system.
Ansible facts gathered by ansible.builtin.setup contain useful information about the storage devices on your managed hosts.
The ansible_facts['devices'] fact includes information about all the storage devices available on the managed host.
This includes additional information such as the partitions on each device, or each device's total size.
The following playbook gathers and displays the ansible_facts['devices'] fact for each managed host.
---
- name: Display storage facts
  hosts: all
  tasks:
    - name: Display device facts
      ansible.builtin.debug:
        var: ansible_facts['devices']

This fact contains a dictionary of variables named for the devices on the system.
Each named device variable itself has a dictionary of variables for its value, which represent information about the device.
For example, if you have the /dev/sda device on your system, you can use the following Jinja2 expression (all on one line) to determine its size in bytes:
{{ ansible_facts['devices']['sda']['sectors'] * ansible_facts['devices']['sda']['sectorsize'] }}

Table 9.7. Selected Facts from a Device Variable Dictionary
| Fact | Comments |
|---|---|
| host | A string that identifies the controller to which the block device is connected. |
| model | A string that identifies the model of the storage device, if applicable. |
| partitions | A dictionary of block devices that are partitions on this device. Each dictionary variable has as its value a dictionary structured like any other device (including values for sectors, size, and so on). |
| sectors | The number of storage sectors the device contains. |
| sectorsize | The size of each sector in bytes. |
| size | A human-readable rough calculation of the device size. |
For example, you could find the size of /dev/sda1 from the following fact:
ansible_facts['devices']['sda']['partitions']['sda1']['size']
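As a hedged sketch of the idempotency check described earlier, the following task formats a partition with the ansible.builtin.command module only when the partition does not appear to carry a file system. The /dev/vdb1 partition name is an assumption for the example, and treating an empty uuids link list as "no file system detected" is only a heuristic, so verify that it matches your environment before relying on it.

- name: Ensure an XFS file system exists on /dev/vdb1
  ansible.builtin.command:
    cmd: mkfs.xfs /dev/vdb1
  when: ansible_facts['devices']['vdb']['partitions']['vdb1']['links']['uuids'] | length == 0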
The ansible_facts['device_links'] fact includes all the links available for each storage device.
If you have multipath devices, you can use this to help determine which devices are alternative paths to the same storage device, or are multipath devices.
The following playbook gathers and displays the ansible_facts['device_links'] fact for all managed hosts.
---
- name: Gather device link facts
  hosts: all
  tasks:
    - name: Display device link facts
      ansible.builtin.debug:
        var: ansible_facts['device_links']

The ansible_facts['mounts'] fact provides information about the currently mounted devices on the managed host.
For each device, this includes the mounted block device, its file system's mount point, mount options, and so on.
The following playbook gathers and displays the ansible_facts['mounts'] fact for managed hosts.
---
- name: Gather mounts
  hosts: all
  tasks:
    - name: Display mounts facts
      ansible.builtin.debug:
        var: ansible_facts['mounts']

The fact contains a list of dictionaries, one for each mounted file system on the managed host.
Table 9.8. Selected Variables from the Dictionary in a Mounted File System List Item

| Variable | Comments |
|---|---|
| mount | The directory on which this file system is mounted. |
| device | The name of the block device that is mounted. |
| fstype | The type of file system the device is formatted with (such as xfs). |
| options | The current mount options in effect. |
| size_total | The total size of the device. |
| size_available | How much space is free on the device. |
| block_size | The size of blocks on the file system. |
| block_total | How many blocks are in the file system. |
| block_available | How many blocks are free in the file system. |
| inode_available | How many inodes are free in the file system. |
For example, you can determine the free space on the root (/) file system on each managed host with the following play:
- name: Print free space on / file system
  hosts: all
  gather_facts: true
  tasks:
    - name: Display free space
      ansible.builtin.debug:
        msg: >
          The root file system on {{ ansible_facts['fqdn'] }} has
          {{ item['block_available'] * item['block_size'] / 1000000 }}
          megabytes free.
      loop: "{{ ansible_facts['mounts'] }}"
      when: item['mount'] == '/'

In this play, gather_facts: true gathers the facts automatically. The math inside the second Jinja2 expression computes decimal megabytes of free space. The loop iterates over every mounted file system in the list, and the conditional is checked on every iteration of the loop: the loop is unrolled, but the module runs only for the item that matches the conditional.