RHCSA Rapid Track
Course update
An updated version of this course is available that uses a newer version of Red Hat Enterprise Linux in the lab environment. The RHEL 9.0 version of the lab environment will retire on December 31, 2024; please complete any work in this lab environment before that date. For the most up-to-date version of this course, we recommend moving to the RHEL 9.3 version.
Provide persistent storage for container data by sharing storage from the container host, and configure a container network.
You can use a container to run a simple process that exits when it completes.
You can also configure a container to run a service continuously, such as a database server. If you run a service continuously, you might eventually need to add more resources to the container, such as persistent storage or access to more networks.
You can use different strategies to configure persistent storage for containers:
For large deployments on an enterprise container platform, such as Red Hat OpenShift, you can use sophisticated storage solutions to provide storage to your containers without knowing the underlying infrastructure.
For small deployments on a single container host, and without a need to scale, you can create persistent storage from the container host by creating a directory to mount on the running container.
When a container, such as a web server or database server, serves content for clients outside the container host, you must set up a communication channel for those clients to access the content of the container. You can configure port mapping to enable communication to a container. With port mapping, the requests that are destined for a port on the container host are forwarded to a port inside the container.
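As a minimal sketch of this idea (the image name and its listening port are assumptions, not taken from this course), a port mapping is set when the container is created with the -p option:

```shell
# Hypothetical example: forward host port 8080 to port 8080 in the container.
# The ubi8/httpd-24 image is assumed to serve HTTP on port 8080.
podman run -d --name web01 -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24

# Requests to the container host's port 8080 now reach the web server.
curl http://localhost:8080/
```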
Imagine that you must perform the following tasks:
Create a containerized database named db01, which is based on MariaDB.
Configure the container port mapping and host firewall to allow traffic on port 3306/tcp.
Configure the db01 container to use persistent storage with the appropriate SELinux context.
Add the appropriate network configuration so that the client01 container can communicate with the db01 container by using DNS.
Some container images enable passing environment variables to customize the container at creation time. You can use environment variables to set parameters in the container and tailor it to your environment without creating your own custom image. Usually, you would not modify the container image, because doing so adds layers to the image, which might be harder to maintain.
You use the podman run -d registry.lab.example.com/rhel8/mariadb-105 command to run a containerized database, but you notice that the container fails to start.
[user@host ~]$ podman run -d registry.lab.example.com/rhel8/mariadb-105 \
--name db01
20751a03897f14764fb0e7c58c74564258595026124179de4456d26c49c435ad
[user@host ~]$ podman ps -a
CONTAINER ID  IMAGE                                              COMMAND     CREATED         STATUS                     PORTS  NAMES
20751a03897f  registry.lab.example.com/rhel8/mariadb-105:latest  run-mysqld  29 seconds ago  Exited (1) 29 seconds ago         db01
You use the podman container logs command to investigate the reason for the container's status.
[user@host ~]$ podman container logs db01
...output omitted...
You must either specify the following environment variables:
MYSQL_USER (regex: '^[a-zA-Z0-9_]+$')
MYSQL_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
MYSQL_DATABASE (regex: '^[a-zA-Z0-9_]+$')
Or the following environment variable:
MYSQL_ROOT_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
Or both.
...output omitted...
From the preceding output, you determine that the container did not continue to run, because the required environment variables were not passed to the container.
You then inspect the mariadb-105 container image to find more information about the environment variables that are available to customize the container.
[user@host ~]$ skopeo inspect docker://registry.lab.example.com/rhel8/mariadb-105
...output omitted...
    "name": "rhel8/mariadb-105",
    "release": "40.1647451927",
    "summary": "MariaDB 10.5 SQL database server",
    "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/rhel8/mariadb-105/images/1-40.1647451927",
    "usage": "podman run -d -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 rhel8/mariadb-105",
    "vcs-ref": "c04193b96a119e176ada62d779bd44a0e0edf7a6",
    "vcs-type": "git",
    "vendor": "Red Hat, Inc.",
...output omitted...
The usage label from the output provides an example of how to run the image.
The url label points to a web page in the Red Hat Container Catalog that documents environment variables and other information about how to use the container image.
The documentation for this image shows that the container uses the 3306 port for the database service. The documentation also shows that the following environment variables are available to configure the database service:
Table 16.2. Environment Variables for the mariadb Image
| Variable | Description |
|---|---|
| MYSQL_USER | Username for the MySQL account to create |
| MYSQL_PASSWORD | Password for the user account |
| MYSQL_DATABASE | Database name |
| MYSQL_ROOT_PASSWORD | Password for the root user (optional) |
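These variables can also be kept out of the command line in an environment file. The file name below is illustrative, but --env-file is a standard podman run option:

```shell
# Hypothetical db.env file holding the variables from the preceding table.
cat > db.env <<'EOF'
MYSQL_USER=student
MYSQL_PASSWORD=student
MYSQL_DATABASE=dev_data
MYSQL_ROOT_PASSWORD=redhat
EOF

# Pass the whole file at creation time instead of repeating the -e option.
podman run -d --name db01 --env-file db.env \
  registry.lab.example.com/rhel8/mariadb-105
```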
After examining the available environment variables for the image, you use the podman run command -e option to pass environment variables to the container, and use the podman ps command to verify that it is running.
[user@host ~]$ podman run -d --name db01 \
-e MYSQL_USER=student \
-e MYSQL_PASSWORD=student \
-e MYSQL_DATABASE=dev_data \
-e MYSQL_ROOT_PASSWORD=redhat \
registry.lab.example.com/rhel8/mariadb-105
[user@host ~]$ podman ps
CONTAINER ID  IMAGE                                              COMMAND     CREATED        STATUS            PORTS  NAMES
4b8f01be7fd6  registry.lab.example.com/rhel8/mariadb-105:latest  run-mysqld  6 seconds ago  Up 6 seconds ago         db01
By default, when you run a container, all of its content comes from the read-only layers of the container image. Because this storage is ephemeral, any new data that the user or the application writes is lost when the container is removed.
To persist data, you can use host file-system content in the container with the --volume (-v) option.
You must consider file-system level permissions when you use this volume type in a container.
In the MariaDB container image, the mysql user must own the /var/lib/mysql directory, the same as if MariaDB was running on the host machine.
The directory to mount into the container must have mysql as the user and group owner (or the UID and GID of the mysql user, if MariaDB is not installed on the host machine).
If you run a container as the root user, then the UIDs and GIDs on your host machine match the UIDs and GIDs inside the container.
The UID and GID matching configuration does not occur the same way in a rootless container. In a rootless container, the user has root access from within the container, because Podman launches a container inside the user namespace.
You can use the podman unshare command to run a command inside the user namespace.
To obtain the UID mapping for your user namespace, use the podman unshare cat command.
[user@host ~]$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536
[user@host ~]$ podman unshare cat /proc/self/gid_map
         0       1000          1
         1     100000      65536
The preceding output shows that in the container, the root user (UID and GID of 0) maps to your user (UID and GID of 1000) on the host machine.
In the container, the UID and GID of 1 maps to the UID and GID of 100000 on the host machine.
Every UID and GID after 1 increments by 1.
For example, the UID and GID of 30 inside a container maps to the UID and GID of 100029 on the host machine.
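The mapping arithmetic above can be sketched as a small shell function. The numbers come from the example uid_map output; your own subordinate UID range might differ:

```shell
# Compute the host UID that a container UID maps to, using the example
# mapping above: container UID 0 -> host UID 1000, and container UIDs
# 1 through 65536 -> host UIDs 100000 through 165535.
host_uid() {
  if [ "$1" -eq 0 ]; then
    echo 1000                      # container root maps to your own UID
  else
    echo $(( 100000 + $1 - 1 ))   # later UIDs shift into the subordinate range
  fi
}

host_uid 30    # prints 100029, matching the example in the text
```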
You use the podman exec command to view the mysql user UID and GID inside the container that is running with ephemeral storage.
[user@host ~]$ podman exec -it db01 grep mysql /etc/passwd
mysql:x:27:27:MySQL Server:/var/lib/mysql:/sbin/nologin
You decide to mount the /home/user/db_data directory into the db01 container to provide persistent storage on the /var/lib/mysql directory of the container.
You then create the /home/user/db_data directory, and use the podman unshare command to set the user namespace UID and GID of 27 as the owner of the directory.
[user@host ~]$ mkdir /home/user/db_data
[user@host ~]$ podman unshare chown 27:27 /home/user/db_data
The UID and GID of 27 in the container maps to the UID and GID of 100026 on the host machine.
You can verify the mapping by viewing the ownership of the /home/user/db_data directory with the ls command.
[user@host ~]$ ls -l /home/user/
total 0
drwxrwxr-x. 3 100026 100026 18 May 5 14:37 db_data
...output omitted...
Now that the correct file-system level permissions are set, you use the podman run command -v option to mount the directory.
[user@host ~]$ podman run -d --name db01 \
-e MYSQL_USER=student \
-e MYSQL_PASSWORD=student \
-e MYSQL_DATABASE=dev_data \
-e MYSQL_ROOT_PASSWORD=redhat \
-v /home/user/db_data:/var/lib/mysql \
registry.lab.example.com/rhel8/mariadb-105
You notice that the db01 container is not running.
[user@host ~]$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfdc20cf9a7e  registry.lab.example.com/rhel8/mariadb-105:latest  run-mysqld  29 seconds ago  Exited (1) 29 seconds ago         db01
The podman container logs command shows a permission error for the /var/lib/mysql/db_data directory.
[user@host ~]$ podman container logs db01
...output omitted...
---> 16:41:25 Initializing database ...
---> 16:41:25 Running mysql_install_db ...
mkdir: cannot create directory '/var/lib/mysql/db_data': Permission denied
Fatal error Can't create database directory '/var/lib/mysql/db_data'
This error happens because of the incorrect SELinux context that is set on the /home/user/db_data directory on the host machine.
You must set the container_file_t SELinux context type before you can mount the directory as persistent storage to a container.
If the directory does not have the container_file_t SELinux context, then the container cannot access the directory.
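As an alternative to the automatic relabeling that follows, you could set the context manually on the host. This is a sketch beyond the course text; it requires root and the SELinux management tools (semanage, restorecon) installed on the host:

```shell
# Record a persistent file-context rule for the directory, then apply it.
semanage fcontext -a -t container_file_t '/home/user/db_data(/.*)?'
restorecon -Rv /home/user/db_data

# Verify the resulting label on the directory.
ls -Zd /home/user/db_data
```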
You can append the Z option to the argument of the podman run command -v option to automatically set the SELinux context on the directory.
So you use the podman run -v /home/user/db_data:/var/lib/mysql:Z command to set the SELinux context for the /home/user/db_data directory when you mount it as persistent storage for the /var/lib/mysql directory.
[user@host ~]$ podman run -d --name db01 \
-e MYSQL_USER=student \
-e MYSQL_PASSWORD=student \
-e MYSQL_DATABASE=dev_data \
-e MYSQL_ROOT_PASSWORD=redhat \
-v /home/user/db_data:/var/lib/mysql:Z \
registry.lab.example.com/rhel8/mariadb-105
You then verify that the correct SELinux context is set on the /home/user/db_data directory with the ls command -Z option.
[user@host ~]$ ls -Z /home/user/
system_u:object_r:container_file_t:s0:c81,c1009 db_data
...output omitted...
To provide network access to containers, clients must connect to ports on the container host that pass the network traffic through to ports in the container. When you map a network port on the container host to a port in the container, the container receives network traffic that is sent to the host network port.
For example, you can map the 13306 port on the container host to the 3306 port on the container for communication with the MariaDB container. Therefore, traffic that is sent to the container host port 13306 would be received by MariaDB that is running in the container.
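From a remote client, the mapped host port is what you connect to. A hypothetical session looks like this (the host name and an installed mysql client are assumptions):

```shell
# Connect to MariaDB in the container through the host's mapped port 13306.
# host.example.com stands in for the container host's name or IP address.
mysql -u student -pstudent -h host.example.com -P 13306 dev_data
```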
You use the podman run command -p option to set a port mapping from the 13306 port from the container host to the 3306 port on the db01 container.
[user@host ~]$ podman run -d --name db01 \
-e MYSQL_USER=student \
-e MYSQL_PASSWORD=student \
-e MYSQL_DATABASE=dev_data \
-e MYSQL_ROOT_PASSWORD=redhat \
-v /home/user/db_data:/var/lib/mysql:Z \
-p 13306:3306 \
registry.lab.example.com/rhel8/mariadb-105
Use the podman port command -a option to show all container port mappings in use.
You can also use the podman port db01 command to show the mapped ports for the db01 container.
[user@host ~]$ podman port -a
1c22fd905120 3306/tcp -> 0.0.0.0:13306
[user@host ~]$ podman port db01
3306/tcp -> 0.0.0.0:13306
You use the firewall-cmd command to allow port 13306 traffic into the container host machine to redirect to the container.
[root@host ~]# firewall-cmd --add-port=13306/tcp --permanent
[root@host ~]# firewall-cmd --reload
Important
A rootless container cannot open a privileged port (a port below 1024) on the container host.
That is, the podman run -p 80:8080 command does not normally work for a rootless container.
To map a port below 1024 on the container host to a container port, you must run Podman as root or otherwise adjust the system.
You can map a port above 1024 on the container host to a privileged port on the container, even if you are running a rootless container.
The 8080:80 mapping works if the container provides a service that listens on port 80.
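One way to "otherwise adjust the system" (an assumption beyond the course text, and it requires root) is to lower the kernel's unprivileged port floor on the container host:

```shell
# Show the lowest port that an unprivileged process may bind (1024 by default).
sysctl net.ipv4.ip_unprivileged_port_start

# Allow unprivileged processes, including rootless Podman, to bind ports >= 80.
# This setting lasts until reboot; persist it in /etc/sysctl.d/ if needed.
sysctl net.ipv4.ip_unprivileged_port_start=80
```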
Podman v4.0 supports two network back ends for containers, Netavark and CNI.
Starting with RHEL 9, systems use Netavark by default.
To verify which network back end is used, run the following podman info command.
[user@host ~]$ podman info --format {{.Host.NetworkBackend}}
netavark
Note
The container-tools meta-package includes the netavark and aardvark-dns packages.
If Podman was installed as a stand-alone package, or if the container-tools meta-package was installed later, then the result of the previous command might be cni.
To change the network back end, set the following configuration in the /usr/share/containers/containers.conf file:
[network]
...output omitted...
network_backend = "netavark"
Existing containers on the host that use the default Podman network cannot resolve each other's hostnames, because DNS is not enabled on the default network.
Use the podman network create command to create a DNS-enabled network.
You use the podman network create command to create the network called db_net, and specify the subnet as 10.87.0.0/16 and the gateway as 10.87.0.1.
[user@host ~]$ podman network create --gateway 10.87.0.1 \
--subnet 10.87.0.0/16 db_net
db_net
If you do not specify the --gateway or --subnet options, then the network is created with default values.
The podman network inspect command displays information about a specific network.
You use the podman network inspect command to verify that the gateway and subnet were correctly set and that the new db_net network is DNS-enabled.
[user@host ~]$ podman network inspect db_net
[
{
"name": "db_net",
...output omitted...
"subnets": [
{
"subnet": "10.87.0.0/16",
"gateway": "10.87.0.1"
}
],
...output omitted...
"dns_enabled": true,
...output omitted...
You can add the DNS-enabled db_net network to a new container with the podman run command --network option.
You use the podman run command --network option to create the db01 and client01 containers that are connected to the db_net network.
[user@host ~]$ podman run -d --name db01 \
-e MYSQL_USER=student \
-e MYSQL_PASSWORD=student \
-e MYSQL_DATABASE=dev_data \
-e MYSQL_ROOT_PASSWORD=redhat \
-v /home/user/db_data:/var/lib/mysql:Z \
-p 13306:3306 \
--network db_net \
registry.lab.example.com/rhel8/mariadb-105
[user@host ~]$ podman run -d --name client01 \
--network db_net \
registry.lab.example.com/ubi8/ubi:latest \
sleep infinity
Because containers are designed to have only the minimum required packages, the containers might not have the required utilities to test communication, such as the ping and ip commands.
You can install these utilities in the container by using the podman exec command.
[user@host ~]$ podman exec -it db01 dnf install -y iputils iproute
...output omitted...
[user@host ~]$ podman exec -it client01 dnf install -y iputils iproute
...output omitted...
The containers can now ping each other by container name.
You test the DNS resolution with the podman exec command.
The names resolve to IPs within the subnet that was manually set for the db_net network.
[user@host ~]$ podman exec -it db01 ping -c3 client01
PING client01.dns.podman (10.87.0.4) 56(84) bytes of data.
64 bytes from 10.87.0.4 (10.87.0.4): icmp_seq=1 ttl=64 time=0.049 ms
...output omitted...
--- client01.dns.podman ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 0.049/0.060/0.072/0.013 ms
[user@host ~]$ podman exec -it client01 ping -c3 db01
PING db01.dns.podman (10.87.0.3) 56(84) bytes of data.
64 bytes from 10.87.0.3 (10.87.0.3): icmp_seq=1 ttl=64 time=0.021 ms
...output omitted...
--- db01.dns.podman ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2047ms
rtt min/avg/max/mdev = 0.021/0.040/0.050/0.013 ms
You verify that the IP addresses in each container match the DNS resolution with the podman exec command.
[user@host ~]$ podman exec -it db01 ip a | grep 10.8
    inet 10.87.0.3/16 brd 10.87.255.255 scope global eth0
[user@host ~]$ podman exec -it client01 ip a | grep 10.8
    inet 10.87.0.4/16 brd 10.87.255.255 scope global eth0
Multiple networks can be connected to a container at the same time to help to separate different types of traffic.
You use the podman network create command to create the backend network.
[user@host ~]$ podman network create backend
You then use the podman network ls command to view all the Podman networks.
[user@host ~]$ podman network ls
NETWORK ID NAME DRIVER
a7fea510a6d1 backend bridge
fe680efc5276 db_net bridge
2f259bab93aa podman bridge
The subnet and gateway were not specified with the podman network create command --gateway and --subnet options.
You use the podman network inspect command to obtain the IP information of the backend network.
[user@host ~]$ podman network inspect backend
[
{
"name": "backend",
...output omitted...
"subnets": [
{
"subnet": "10.89.1.0/24",
"gateway": "10.89.1.1"
...output omitted...
You can use the podman network connect command to connect additional networks to a container while it is running.
You use the podman network connect command to connect the backend network to the db01 and client01 containers.
[user@host ~]$ podman network connect backend db01
[user@host ~]$ podman network connect backend client01
Important
If a network is not specified with the podman run command, then the container connects to the default network.
The default network uses the slirp4netns network mode, and the networks that you create with the podman network create command use the bridge network mode.
If you try to connect a bridge network to a container by using the slirp4netns network mode, then the command fails:
Error: "slirp4netns" is not supported: invalid network mode
You use the podman inspect command to verify that both networks are connected to each container and to display the IP information.
[user@host ~]$ podman inspect db01
...output omitted...
"backend": {
    "EndpointID": "",
    "Gateway": "10.89.1.1",
    "IPAddress": "10.89.1.4",
    ...output omitted...
},
"db_net": {
    "EndpointID": "",
    "Gateway": "10.87.0.1",
    "IPAddress": "10.87.0.3",
...output omitted...
[user@host ~]$ podman inspect client01
...output omitted...
"backend": {
    "EndpointID": "",
    "Gateway": "10.89.1.1",
    "IPAddress": "10.89.1.5",
    ...output omitted...
},
"db_net": {
    "EndpointID": "",
    "Gateway": "10.87.0.1",
    "IPAddress": "10.87.0.4",
...output omitted...
The client01 container can now communicate with the db01 container on both networks.
You use the podman exec command to ping both networks on the db01 container from the client01 container.
[user@host ~]$ podman exec -it client01 ping -c3 10.89.1.4 | grep 'packet loss'
3 packets transmitted, 3 received, 0% packet loss, time 2052ms
[user@host ~]$ podman exec -it client01 ping -c3 10.87.0.3 | grep 'packet loss'
3 packets transmitted, 3 received, 0% packet loss, time 2054ms
References
podman(1), podman-exec(1), podman-info(1), podman-network(1), podman-network-create(1), podman-network-inspect(1), podman-network-ls(1), podman-port(1), podman-run(1), and podman-unshare(1) man pages
For more information, refer to the Working with Containers chapter in the Building, Running, and Managing Linux Containers on Red Hat Enterprise Linux 9 guide at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/building_running_and_managing_containers/assembly_working-with-containers_building-running-and-managing-containers