Manage storage and disks for VMs in Red Hat OpenShift.
Outcomes
Attach and detach disks from VMs.
Create PVs and PVCs for external storage.
Attach external storage to VMs.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that the cluster API is reachable.
The command creates the storage-review namespace and starts two virtual machines, vm1 and vm2, in that namespace.
[student@workstation ~]$ lab start storage-review
Instructions
From a terminal on the workstation machine, use the oc command to log in to your Red Hat OpenShift cluster as admin with redhatocp as the password.
https://api.ocp4.example.com:6443
https://console-openshift-console.apps.ocp4.example.com
Open a web browser and log in to the OpenShift web console.
Confirm that the two VMs are running under the storage-review project.
From a terminal, log in to your OpenShift cluster as the admin user.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login successful.
...output omitted...
Switch to the storage-review project.
[student@workstation ~]$ oc project storage-review
Now using project "storage-review" on server "https://api.ocp4.example.com:6443".
Identify the URL for the OpenShift web console.
[student@workstation ~]$ oc whoami --show-console
https://console-openshift-console.apps.ocp4.example.com
Open a web browser and navigate to the web console URL.
When prompted, log in as the admin user with redhatocp as the password.
Navigate to Virtualization → VirtualMachines and then select the storage-review project.
Confirm that the vm1 and vm2 VMs are running.
A disk named webroot is connected to the vm1 VM.
Detach the disk and then connect it to the vm2 VM.
Use the VirtIO interface when you connect the disk to the VM.
Ensure that you preserve the data when you detach the disk.
Ensure that the vm1 VM is running after you complete your work.
To verify the disk, log in to the console of the vm2 VM as root with redhat as the password, and then run the lsblk command.
The command output should list the new vdb disk.
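In the VM definition, a disk attached over the VirtIO interface appears as a matched pair of disks and volumes entries. The following is a minimal sketch of the relevant part of the vm2 VirtualMachine resource after this step, assuming the field names from the KubeVirt VirtualMachine API; it is illustrative, not the full manifest:

```
# Abridged sketch of the vm2 VirtualMachine spec (KubeVirt API; illustrative)
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: webroot
            disk:
              bus: virtio       # the VirtIO interface selected in the form
      volumes:
      - name: webroot
        persistentVolumeClaim:
          claimName: webroot    # the existing PVC that keeps the data
```

Detaching the disk in the console removes these two entries from the VM; the PVC itself, and therefore the data, is not deleted.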
Use the OpenShift web console to stop the vm1 VM.
Select the vm1 VM and then click Actions → Stop.
Click Stop to confirm the operation, if needed.
Wait for the machine to stop.
Navigate to the Configuration → Disks tab.
At the end of the row of the webroot disk, click the vertical ellipsis icon and then click Detach.
Click Detach to confirm.
Click Actions → Start to start the VM.
Navigate to Virtualization → VirtualMachines, select the vm2 VM, and then click Actions → Stop.
Click Stop to confirm the operation, if needed.
Wait for the machine to stop.
Navigate to the Configuration → Disks tab and then click Add disk. Complete the form by using the following information, and then click Save.
If the Interface field is set to SCSI and VirtIO is not available, then click Cancel and start over. The web interface takes a few seconds to detect that the VM is stopped.
| Field | Value |
|---|---|
| Use this disk as a boot source | Unset |
| Name | webroot |
| Source | Use an existing PVC |
| PVC project | storage-review |
| PVC name | webroot |
| Type | Disk |
| Interface | VirtIO |
Click Actions → Start to start the VM.
Log in to the VM console and confirm that a new 1 GiB device is available.
Navigate to the Console tab and then log in as the root user with redhat as the password.
Run the lsblk command to list the block devices.
Notice the 1 GiB vdb block device.
[root@vm2 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0   10G  0 disk
├─vda1 252:1    0    1M  0 part
├─vda2 252:2    0  100M  0 part /boot/efi
└─vda3 252:3    0  9.9G  0 part /
vdb    252:16   0    1G  0 disk
Log out of the VM console.
[root@vm2 ~]# logout
Create the PV resource named nfs-pv, which declares an external NFS share.
Ensure that you reserve the PV for a PVC named weblog in the storage-review namespace.
You create the weblog PVC in a later step.
The lab command prepared the ~/DO316/labs/storage-review/nfspv.yaml resource file that you can use as a model.
To use that file, you must adapt it to the requirements.
From the terminal on the workstation machine, edit the ~/DO316/labs/storage-review/nfspv.yaml resource file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  claimRef:
    name: weblog
    namespace: storage-review
  nfs:
    path: /exports-ocp4/storage-review
    server: 192.168.50.254
Use the oc command to create the resource:
[student@workstation ~]$ oc apply -f ~/DO316/labs/storage-review/nfspv.yaml
persistentvolume/nfs-pv created
Alternatively, from the OpenShift web console, navigate to Storage → PersistentVolumes, click Create PersistentVolume, and then copy and paste the resource file into the YAML editor. Click Create to create the resource.
Run the oc get pv command to verify your work.
[student@workstation ~]$ oc get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                   ...
nfs-pv   5Gi        RWX            Retain           Available   storage-review/weblog   ...
Create the weblog PVC in the storage-review namespace with the following parameters:
| Field | Value |
|---|---|
| PVC name | weblog |
| Access mode | ReadWriteMany (RWX) |
| Size | 5 GiB |
| Volume mode | Filesystem |
The lab command prepared the ~/DO316/labs/storage-review/nfspvc.yaml resource file that you can use as a model.
To use that file, you must adapt it to the requirements.
Confirm that Kubernetes binds the weblog PVC to the PV named nfs-pv.
From the terminal, edit the ~/DO316/labs/storage-review/nfspvc.yaml resource file.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: weblog
  namespace: storage-review
spec:
  resources:
    requests:
      storage: 5Gi
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
Use the oc command to create the resource:
[student@workstation ~]$ oc apply -f ~/DO316/labs/storage-review/nfspvc.yaml
persistentvolumeclaim/weblog created
Alternatively, from the OpenShift web console, navigate to Storage → PersistentVolumeClaims, select the storage-review project, and click Create PersistentVolumeClaim → With Form.
Complete the form by using the following information, and then click Create to create the resource.
| Field | Value |
|---|---|
| StorageClass | nfs-storage |
| PersistentVolumeClaim name | weblog |
| Access mode | Shared access (RWX) |
| Size | 5 GiB |
| Use label selectors to request storage | Unset |
| Volume mode | Filesystem |
Run the oc get pvc command to confirm that Kubernetes binds your PVC to the nfs-pv persistent volume.
[student@workstation ~]$ oc get pvc weblog
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
weblog   Bound    nfs-pv   5Gi        RWX            nfs-storage    1m
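The binding is recorded on the claim object itself. The following abridged sketch shows what the bound PVC resource looks like, assuming the standard Kubernetes PersistentVolumeClaim fields; Kubernetes fills in spec.volumeName and status when it binds the claim to the reserved PV:

```
# Abridged view of the bound weblog PVC (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: weblog
  namespace: storage-review
spec:
  accessModes:
  - ReadWriteMany
  volumeName: nfs-pv    # set by Kubernetes at bind time
status:
  phase: Bound
  capacity:
    storage: 5Gi
```

Because the nfs-pv persistent volume reserves itself for this claim through its claimRef field, Kubernetes binds the two as soon as the PVC exists.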
Attach the weblog PVC as a disk to the vm2 VM by using a VirtIO interface.
Ensure that the vm2 VM is running after you have completed your work.
To verify the disk, log in to the console of the vm2 VM as root with redhat as the password, and then run the lsblk command.
The command output should list a new 5 GiB disk named vdc.
Use the OpenShift web console to stop the vm2 VM.
Navigate to Virtualization → VirtualMachines, select the vm2 VM, and then click Actions → Stop.
Click Stop to confirm the operation, if needed.
Wait for the machine to stop.
Navigate to the Configuration → Disks tab and then click Add disk. Complete the form by using the following information, and then click Save.
If the Interface field is set to SCSI and VirtIO is not available, then click Cancel and start over. The web interface takes a few seconds to detect that the VM is stopped.
| Field | Value |
|---|---|
| Use this disk as a boot source | Unset |
| Name | weblog |
| Source | Use an existing PVC |
| PVC project | storage-review |
| PVC name | weblog |
| Type | Disk |
| Interface | VirtIO |
Click Actions → Start to start the VM, and wait until it finishes booting.
Log in to the VM console and confirm that a new 5 GiB device is available.
Navigate to the Console tab and then log in as the root user with redhat as the password.
Run the lsblk command to list the block devices.
Notice the 5 GiB vdc block device.
[root@vm2 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0   10G  0 disk
├─vda1 252:1    0    1M  0 part
├─vda2 252:2    0  100M  0 part /boot/efi
└─vda3 252:3    0  9.9G  0 part /
vdb    252:16   0    1G  0 disk
vdc    252:32   0    5G  0 disk
Log out of the VM console.
[root@vm2 ~]# logout