Deploy a database that uses guaranteed high IOPS storage.
Outcomes
Inspect the Amazon Elastic Block Store (EBS) volume types that the nodes of a Red Hat OpenShift on AWS (ROSA) cluster use.
List the existing storage classes.
Create storage classes for I/O-intensive workloads.
Configure OpenShift workloads to request volumes of a specific storage class.
To perform this exercise, ensure that you have completed the section called “Guided Exercise: Configure Developer Self-service for a ROSA Cluster”.
Procedure 2.3. Instructions
Verify that you are logged in to your ROSA cluster from the OpenShift CLI.
Open a command-line terminal on your system, and then run the oc whoami command to verify your connection to the ROSA cluster.
If the command succeeds, then skip to the next step.
$ oc whoami
wlombardogh
The username is different in your command output.
If the command returns an error, then reconnect to your ROSA cluster.
Run the rosa describe cluster command to retrieve the URL of the OpenShift web console.
$ rosa describe cluster --cluster do120-cluster
...output omitted...
Console URL: https://console-openshift-console.apps.do120-cluster.jf96.p1.openshiftapps.com
...output omitted...
The URL in the preceding output is different on your system.
Open a web browser, and then navigate to the OpenShift web console URL. Click the GitHub identity provider option. If you are not already logged in to GitHub, then provide your GitHub credentials.
Click your name in the upper right corner of the web console, and then click the option to copy the login command. If the login page is displayed, then click the identity provider option and use your GitHub credentials for authentication.
Click the link to display the token, and then copy the oc login --token command to the clipboard.
Paste the command into the command-line terminal, and then run the command.
$ oc login --token=sha256~1NofZkVCi3qCBcBJGc6XiOJTK5SDXF2ZYwhAARx5yJg --server=https://api.do120-cluster.jf96.p1.openshiftapps.com:6443
Logged into "https://api.do120-cluster.jf96.p1.openshiftapps.com:6443" as "wlombardogh" using the token provided.
...output omitted...
In the preceding command, the token and the URL are different on your system.
Review the EBS volume type that the cluster machines use for their disks.
List the machine resources in the openshift-machine-api namespace.
Machine resources describe the hosts that the cluster nodes use.
The machine names include the infra, master, or worker node types.
$ oc get machine -n openshift-machine-api
NAME                                          PHASE     TYPE         REGION      ...
do120-cluster-c8drv-infra-us-east-1a-qjw9w    Running   r5.xlarge    us-east-1   ...
do120-cluster-c8drv-infra-us-east-1a-rrm6c    Running   r5.xlarge    us-east-1   ...
do120-cluster-c8drv-master-0                  Running   m5.2xlarge   us-east-1   ...
do120-cluster-c8drv-master-1                  Running   m5.2xlarge   us-east-1   ...
do120-cluster-c8drv-master-2                  Running   m5.2xlarge   us-east-1   ...
do120-cluster-c8drv-worker-us-east-1a-brnvp   Running   m5.xlarge    us-east-1   ...
do120-cluster-c8drv-worker-us-east-1a-tnhfn   Running   m5.xlarge    us-east-1   ...
The machine names in the preceding output are different on your system.
Select one of the control plane machines from the preceding list, and then retrieve its block device parameters.
Control plane machines have the master keyword in their names.
The machine uses the gp3 EBS volume type.
On a Microsoft Windows system, replace the line continuation character (\) in the following long command with the backtick (`) character, which is the line continuation character in PowerShell.
$ oc get machine do120-cluster-c8drv-master-0 -n openshift-machine-api \
  -o jsonpath-as-json="{.spec.providerSpec.value.blockDevices}"
[
    [
        {
            "ebs": {
                "encrypted": true,
                "iops": 0,
                "kmsKey": {
                    "arn": ""
                },
                "volumeSize": 350,
                "volumeType": "gp3"
            }
        }
    ]
]
Retrieve the block device parameters of one of the infrastructure machines.
The machine also uses the gp3 EBS volume type.
$ oc get machine do120-cluster-c8drv-infra-us-east-1a-qjw9w \
  -n openshift-machine-api \
  -o jsonpath-as-json="{.spec.providerSpec.value.blockDevices}"
[
    [
        {
            "ebs": {
                "encrypted": true,
                "iops": 0,
                "kmsKey": {
                    "arn": ""
                },
                "volumeSize": 300,
                "volumeType": "gp3"
            }
        }
    ]
]
Retrieve the block device parameters of one of the worker machines.
The machine also uses the gp3 EBS volume type.
$ oc get machine do120-cluster-c8drv-worker-us-east-1a-brnvp \
  -n openshift-machine-api \
  -o jsonpath-as-json="{.spec.providerSpec.value.blockDevices}"
[
    [
        {
            "ebs": {
                "encrypted": true,
                "iops": 0,
                "kmsKey": {
                    "arn": ""
                },
                "volumeSize": 300,
                "volumeType": "gp3"
            }
        }
    ]
]
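Rather than inspecting each machine one at a time, the volume type can be extracted from the block-device JSON with standard text tools. The following is a minimal sketch; the JSON is inlined for illustration, and on a live cluster you would pipe the output of the preceding oc get machine command into the same filter instead.

```shell
# Sketch: pull the "volumeType" value out of block-device JSON such as
# the preceding output. The JSON string is inlined here; on a live
# cluster, pipe the oc get machine ... -o jsonpath-as-json output in.
json='[ [ { "ebs": { "encrypted": true, "iops": 0, "volumeSize": 300, "volumeType": "gp3" } } ] ]'
volume_type=$(printf '%s' "$json" | grep -o '"volumeType": *"[^"]*"' | cut -d '"' -f 4)
echo "$volume_type"
```

A JSON-aware tool such as jq is more robust for production scripts; grep and cut are used here only to keep the sketch dependency-free.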
Your organization plans to deploy an I/O-intensive MariaDB database.
That workload requires an EBS volume of the io1 type.
Verify that no existing storage class provides this type, and that your AWS Region supports it.
List the storage classes in your cluster.
Notice that only the gp2 and gp3 types are available.
$ oc get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ...
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer ...
gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer ...
gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer ...
gp3-csi ebs.csi.aws.com Delete WaitForFirstConsumer ...
List the availability zones in your AWS Region.
$ aws ec2 describe-availability-zones --query "AvailabilityZones[].ZoneName"
[
"us-east-1a",
"us-east-1b",
"us-east-1c",
"us-east-1d",
"us-east-1e",
"us-east-1f"
]
The availability zones in the preceding output depend on your AWS Region, and might be different on your system.
Create an EBS volume of the io1 type in dry-run mode to verify that the AWS Region supports this type.
Use one of the availability zones for the --availability-zone option.
$ aws ec2 create-volume --dry-run --volume-type io1 --size 20 --iops 1000 \
  --availability-zone us-east-1a

An error occurred (DryRunOperation) when calling the CreateVolume operation: Request would have succeeded, but DryRun flag is set.
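Note that a dry run makes the aws CLI exit with a nonzero status even when the request would succeed, so a script cannot rely on the exit code alone. The following sketch checks the error text instead; the message is inlined here, and on a live system you would capture the standard error of the aws ec2 create-volume --dry-run command.

```shell
# Sketch: detect a successful dry run by its message text, because the
# aws CLI reports a DryRunOperation "error" even when the request is
# valid. The message is inlined here for illustration.
msg='An error occurred (DryRunOperation) when calling the CreateVolume operation: Request would have succeeded, but DryRun flag is set.'
if printf '%s' "$msg" | grep -q 'Request would have succeeded'; then
  echo "io1 is supported in this availability zone"
fi
```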
Create the io1-50 storage class.
Download the io1-50.yaml resource file at https://raw.githubusercontent.com/RedHatTraining/DO12X-apps/main/ROSA/configure-storage/io1-50.yaml.
Review the io1-50.yaml file.
You do not have to change its contents.
The storage class uses the ebs.csi.aws.com provisioner to request EBS volumes of the io1 type, with a maximum of 50 I/Os per second per GiB.
For example, for a 4 GiB volume, the maximum IOPS is 200 (4 x 50).
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io1-50
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: ext4
  type: io1
  iopsPerGB: "50"
  encrypted: "true"
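The iopsPerGB arithmetic scales linearly with the requested volume size, and 50 IOPS per GiB is the maximum ratio that the io1 type supports. A quick sketch of the calculation:

```shell
# Sketch of the iopsPerGB calculation from the io1-50 storage class:
# maximum IOPS = requested size in GiB x iopsPerGB.
iops_per_gb=50
volume_size_gib=4
max_iops=$((volume_size_gib * iops_per_gb))
echo "A ${volume_size_gib} GiB volume from io1-50 gets up to ${max_iops} IOPS"
```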
Use the oc apply command to create the storage class resource from the io1-50.yaml file.
$ oc apply -f io1-50.yaml
storageclass.storage.k8s.io/io1-50 created
Verify that the storage class is available.
$ oc get sc io1-50
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ...
io1-50 ebs.csi.aws.com Delete WaitForFirstConsumer ...
Create the configure-storage project, and then deploy the MariaDB database from the mariadb.yaml resource file.
The persistent volume claim (PVC) resource in that file requests a volume from the io1-50 storage class.
Use the oc new-project command to create the configure-storage project.
$ oc new-project configure-storage
Now using project "configure-storage" on server "https://api.do120-cluster.jf96.p1.openshiftapps.com:6443".
...output omitted...
Download the mariadb.yaml resource file at https://raw.githubusercontent.com/RedHatTraining/DO12X-apps/main/ROSA/configure-storage/mariadb.yaml.
Review the mariadb.yaml file.
You do not have to change its contents.
The file declares the mariadb PVC resource that uses the io1-50 storage class, and requests 4 GiB of space.
The resulting volume, named mariadb-data, is mounted in the /var/lib/mysql/data directory inside the container.
---
apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    labels:
      app: mariadb
    name: mariadb
  spec:
    storageClassName: io1-50
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 4Gi
...output omitted...
    spec:
      volumes:
      - name: mariadb-data
        persistentVolumeClaim:
          claimName: mariadb
      containers:
      - name: mariadb
        image: registry.redhat.io/rhel9/mariadb-105
        volumeMounts:
        - mountPath: /var/lib/mysql/data
          name: mariadb-data
        ports:
        - containerPort: 3306
...output omitted...
Use the oc apply command to deploy the MariaDB server.
$ oc apply -f mariadb.yaml
persistentvolumeclaim/mariadb created
secret/mariadb created
deployment.apps/mariadb created
service/mariadb created
Verify that OpenShift provisions an EBS volume for the MariaDB workload.
Verify that OpenShift binds the mariadb PVC to a persistent volume.
You might have to rerun the command several times for the command to report a Bound status.
$ oc get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ...
mariadb   Bound    pvc-f368c5d5-1a6a-4b13-8a39-163987e6d5d9   4Gi        ...
The volume name is different on your system.
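Instead of rerunning the command by hand, a script can poll the PVC phase until it reports Bound. In the sketch below, pvc_phase is a stand-in for `oc get pvc mariadb -o jsonpath='{.status.phase}'`; it returns Bound immediately here so that the sketch is self-contained.

```shell
# Sketch of a wait loop for the Bound status. pvc_phase stands in for
# `oc get pvc mariadb -o jsonpath='{.status.phase}'`; replace the stub
# with the real oc call on a live cluster.
pvc_phase() { echo 'Bound'; }
until [ "$(pvc_phase)" = 'Bound' ]; do
  sleep 5
done
echo 'PVC is bound'
```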
Use the aws ec2 describe-volumes command to verify that an io1 Amazon EBS volume exists for the mariadb PVC.
In the EBS volumes for OpenShift, the kubernetes.io/created-for/pvc/name tag is set to the name of the PVC.
The --filter option filters the EBS volumes by that tag.
The --query option limits the output to the volume type.
The command output confirms that the type of the EBS volume for the mariadb PVC is io1.
$ aws ec2 describe-volumes \
  --filters "Name=tag:kubernetes.io/created-for/pvc/name,Values=mariadb" \
  --query "Volumes[0].VolumeType"
"io1"
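With the default JSON output format, the --query result is a JSON-encoded string, so a script must strip the quotes before comparing the value. A minimal sketch, with the raw value inlined here in place of capturing the aws command output:

```shell
# Sketch: normalize the quoted JSON string returned by the aws --query
# option before comparing it. raw is inlined for illustration; on a live
# system, capture it from the aws ec2 describe-volumes command above.
raw='"io1"'
vtype=$(printf '%s' "$raw" | tr -d '"')
[ "$vtype" = 'io1' ] && echo "volume type verified: $vtype"
```

Alternatively, adding --output text to the aws command returns the value without quotes.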
Retrieve the name of the pod that runs the MariaDB server.
$ oc get pods
NAME                     READY   STATUS    RESTARTS   AGE
mariadb-7cd894c6-vhqpw   1/1     Running   0          30m
The pod name is different on your system.
Use the df command inside the container to verify that the /var/lib/mysql/data directory uses the 4 GiB volume.
In the following command, replace the name of the pod with the name that you retrieved in the preceding step.
The file system is smaller than 4 GiB, to account for the file system overhead.
$ oc rsh mariadb-7cd894c6-vhqpw df -h /var/lib/mysql/data
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme1n1    3.9G  119M  3.8G   4% /var/lib/mysql/data
Clean up your work by deleting the configure-storage project.
$ oc delete project configure-storage
project.project.openshift.io "configure-storage" deleted
Clean up the storage class.
$ oc delete sc io1-50
storageclass.storage.k8s.io "io1-50" deleted
Do not delete your ROSA cluster, because later exercises use it.