In your environment, NFS storage runs on slower, less expensive disks, and Red Hat OpenShift Data Foundation block storage runs on faster, more expensive backing devices. You need two new VMs for an upcoming project. The vm1 database machine requires faster storage, whereas vm2 is an internal-use file server that can use less expensive storage. In this exercise, you connect these two VMs to the appropriate storage service that meets the requirements.
Outcomes
Prepare new VM disks.
Attach disks to VMs.
Detach and delete disks from VMs.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that the cluster API is reachable.
It also creates the storage-intro namespace and starts two virtual machines in that namespace: the vm1 VM, which hosts a MariaDB database, and the vm2 VM, which hosts a web server.
[student@workstation ~]$ lab start storage-intro
Instructions
Use the OpenShift web console to confirm that the two VMs are running.
Open a web browser and navigate to the web console URL:
https://console-openshift-console.apps.ocp4.example.com
Log in as the admin user with redhatocp as the password.
Navigate to → and select the storage-intro project.
Confirm that the vm1 and vm2 VMs are running.
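You can perform the same check from the command line. A minimal sketch, assuming that you are logged in to the cluster as the admin user from the workstation machine:
[student@workstation ~]$ oc get vmi -n storage-intro
Both virtual machine instances should report the Running phase.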
Prepare a 5 GiB blank disk for the vm1 database server.
Attach the disk to the VM by using a virtio interface.
Use the ocs-external-storagecluster-ceph-rbd-virtualization storage class, which provides fast block storage.
Stop the vm1 VM so that you can use the virtio interface to attach the new disk.
Remember that the only available interface is scsi when the VM is running.
Select the vm1 VM and then click → .
Click to confirm the operation, if needed.
Wait for the machine to stop.
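You can also stop the VM from the command line. A sketch, assuming that the virtctl client is installed on the workstation machine:
[student@workstation ~]$ virtctl stop vm1 -n storage-intro
[student@workstation ~]$ oc get vmi -n storage-intro -w
The second command watches the virtual machine instances until the vm1 instance disappears, which indicates that the VM has stopped.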
Navigate to the → tab and then click . Complete the form by using the following information and click to create and attach the disk to the VM.
| Field | Value |
|---|---|
| Use this disk as a boot source | Unset |
| Name | dbdata |
| Source | Blank (creates PVC) |
| Size | 5 GiB |
| Type | Disk |
| Interface | VirtIO |
| StorageClass | ocs-external-storagecluster-ceph-rbd-virtualization |
| Apply optimized StorageProfile settings | Checked |
| Access mode | Shared access (RWX) |
| Volume mode | Block |
| Enable preallocation | Unset |
If the Interface field is set to scsi and virtio is not available, the web interface has not yet detected that the VM stopped. Close the form, wait a few seconds, and start over.
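The Add disk form that you just completed is roughly equivalent to adding a blank DataVolume template, a virtio disk, and a matching volume to the VM definition. The following YAML is a partial sketch that shows only the new entries, not the complete vm1 resource; the exact structure that the web console generates can differ:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm1
  namespace: storage-intro
spec:
  dataVolumeTemplates:
  - metadata:
      name: dbdata
    spec:
      source:
        blank: {}
      storage:
        storageClassName: ocs-external-storagecluster-ceph-rbd-virtualization
        accessModes:
        - ReadWriteMany
        volumeMode: Block
        resources:
          requests:
            storage: 5Gi
  template:
    spec:
      domain:
        devices:
          disks:
          - name: dbdata
            disk:
              bus: virtio
      volumes:
      - name: dbdata
        dataVolume:
          name: dbdata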
Click → to start the vm1 VM.
If a yellow warning message appears, then wait a few moments and try again to start the VM.
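Alternatively, you can start the VM from the command line; a sketch, assuming virtctl is installed on the workstation machine:
[student@workstation ~]$ virtctl start vm1 -n storage-intro
[student@workstation ~]$ oc get vmi vm1 -n storage-intro
The virtual machine instance should report the Running phase after a short time.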
Log in to the VM console and confirm that a new 5 GiB device is available.
Navigate to the tab and then log in as the root user with redhat as the password.
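If you prefer a terminal over the web console tab, a serial console session also works; a sketch, assuming virtctl is installed on the workstation machine. Press Enter to reach the login prompt and Ctrl+] to disconnect:
[student@workstation ~]$ virtctl console vm1 -n storage-intro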
Run the lsblk command to list the block devices.
Notice the 5 GiB vdc block device.
[root@vm1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1M 0 part
├─vda2 252:2 0 100M 0 part /boot/efi
└─vda3 252:3 0 9.9G 0 part /
vdb 252:16 0 1M 0 disk
vdc 252:32 0 5G 0 disk
Log out of the VM console.
[root@vm1 ~]# logout
Prepare a 10 GiB blank disk for the vm2 web server.
Attach the disk to the VM by using a virtio interface.
Use the nfs-storage storage class.
Navigate to → and then select the vm2 VM.
Stop the vm2 VM so that you can use the virtio interface to attach the new disk.
Click → to stop the vm2 VM.
Click to confirm the operation, if needed.
Wait for the machine to stop.
Navigate to the → tab and then click .
Complete the form by using the following information.
At the bottom of the form, notice that the volume mode is set to Filesystem by default.
Do not change that parameter.
Click to create and attach the disk to the VM.
| Field | Value |
|---|---|
| Use this disk as a boot source | Unset |
| Name | staticdata |
| Source | Blank (creates PVC) |
| Size | 10 GiB |
| Type | Disk |
| Interface | VirtIO |
| StorageClass | nfs-storage |
| Apply optimized StorageProfile settings | Checked |
| Access mode | Shared access (RWX) |
| Volume mode | Filesystem |
| Enable preallocation | Unset |
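For comparison with the vm1 disk, the storage section of an equivalent DataVolume for this disk would look roughly like the following sketch; note the nfs-storage storage class and the Filesystem volume mode:
      storage:
        storageClassName: nfs-storage
        accessModes:
        - ReadWriteMany
        volumeMode: Filesystem
        resources:
          requests:
            storage: 10Gi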
Click → to start the vm2 VM.
Log in to the VM console and confirm that a new 10 GiB device is available.
Navigate to the tab and then log in as the root user with redhat as the password.
Run the lsblk command to list the block devices.
Notice the 10 GiB vdc block device.
Even though you create the volume in Filesystem mode, OpenShift Virtualization exposes the volume as a block device to the operating system inside the VM.
As a consequence, the lsblk command that you run inside the VM reports the volume as a new block device.
[root@vm2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1M 0 part
├─vda2 252:2 0 100M 0 part /boot/efi
└─vda3 252:3 0 9.9G 0 part /
vdb 252:16 0 1M 0 disk
vdc 252:32 0 10G 0 disk
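The difference between the two new disks is also visible from the cluster side. From the workstation machine, a sketch, assuming that the persistent volume claims are named after the dbdata and staticdata disks:
[student@workstation ~]$ oc get pvc -n storage-intro -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,VOLUMEMODE:.spec.volumeMode
The dbdata volume should report the Block mode on the Ceph RBD storage class, and the staticdata volume should report the Filesystem mode on nfs-storage.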
Log out of the VM console.
[root@vm2 ~]# logout
Detach the additional disk from the vm2 VM.
Click → to stop the vm2 VM.
Navigate to the → .
At the end of the row of the staticdata disk, click the vertical ellipsis icon and then click .
Click in the confirmation window.
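Detaching the disk in the web console removes both the disk entry and the matching volume entry from the VM definition. If you wanted to do the same from the command line, a sketch is to edit the VM directly:
[student@workstation ~]$ oc edit vm vm2 -n storage-intro
Remove the staticdata item under spec.template.spec.domain.devices.disks and the staticdata item under spec.template.spec.volumes. If you also want to delete the underlying DataVolume and its PVC, remove the staticdata entry under spec.dataVolumeTemplates as well.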
Click → to start the vm2 VM.
Log in to the VM console and confirm that the device no longer exists.
Navigate to the tab and then log in as the root user with redhat as the password.
Run the lsblk command to list the block devices.
Notice that the command no longer reports the vdc device.
[root@vm2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1M 0 part
├─vda2 252:2 0 100M 0 part /boot/efi
└─vda3 252:3 0 9.9G 0 part /
vdb 252:16 0 1M 0 disk
Log out of the VM console.
[root@vm2 ~]# logout
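Optionally, confirm from the workstation machine that the vm2 definition no longer references the disk; a sketch using a JSONPath query:
[student@workstation ~]$ oc get vm vm2 -n storage-intro -o jsonpath='{.spec.template.spec.volumes[*].name}{"\n"}'
The staticdata volume should no longer appear in the output.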