Connecting Virtual Machines to External Storage

Objectives

  • Connect a VM to external storage by using Multus.

Prepare External Storage

When you create a persistent volume claim (PVC) to request a volume for your application, Kubernetes prepares a persistent volume (PV) and then binds it to the PVC.

With dynamic provisioning, Kubernetes creates persistent volumes on demand. It uses a provisioner to control the back-end storage.
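For example, cluster administrators configure dynamic provisioning by declaring a storage class that references a provisioner. The following sketch is illustrative only; the storage class name and the provisioner are hypothetical:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-san              # hypothetical storage class name
provisioner: csi.example.com  # hypothetical CSI driver that creates the volumes on demand
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer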

Not all storage types have a provisioner. In some organizations, dynamic provisioning is not allowed because the back-end storage is managed manually or is used by applications that Kubernetes does not control.

In those environments, Kubernetes uses static provisioning. Your storage administrators prepare several volumes with the back-end storage, and then your Kubernetes administrators declare these volumes in PV resources. When you create the PVC, Kubernetes binds one available PV to the claim.

Connect Virtual Machines to External Storage

You can use one of these options to connect VMs to external storage:

  • Connect the storage from inside the VM.

  • Connect the storage through a Kubernetes PV and then attach the volume as a VM disk.

The first option is to connect the storage directly from inside the VM by using the VM operating system tools. You can choose this solution for storage that you access through the network, such as Server Message Block (SMB), Network File System (NFS), or Internet Small Computer Systems Interface (iSCSI).
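For example, assuming a Red Hat Enterprise Linux guest with the nfs-utils package installed, and a hypothetical NFS server at 192.0.2.10, you could mount a share directly from inside the VM:

[user@vm ~]$ sudo mkdir /mnt/data
[user@vm ~]$ sudo mount -t nfs 192.0.2.10:/exports/data /mnt/data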

If you access the back-end storage through a dedicated network, then your cluster administrators can use the Kubernetes NMState Operator to configure that network on the cluster nodes. You can then use the Multus plug-in to add a network interface on your VM that is connected to the storage network. The Kubernetes NMState Operator and Multus are explained in more detail elsewhere in this course.
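As an illustration only, a network attachment definition similar to the following sketch exposes a dedicated storage bridge to your VMs through the Multus plug-in. The attachment name, namespace, and bridge name are hypothetical, and the bridge itself must already exist on the nodes, for example through a Kubernetes NMState Operator policy:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: storage-net
  namespace: vm-project
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "storage-net",
      "type": "cnv-bridge",
      "bridge": "br-storage"
    }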

Another option is to prepare a static PV for the external storage, create a PVC that binds to the PV, and then attach that PVC as a disk inside the VM. As in the first option, your cluster administrators can use the Kubernetes NMState Operator to configure the cluster nodes for network storage that you access through a dedicated network.

Create a Persistent Volume

A PV represents a piece of storage and provides all the details that the cluster nodes need to connect to that storage.

The following example shows the resource file for a PV that describes the connection to an iSCSI target:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 50Gi  1
  accessModes:
    - ReadWriteOnce  2
  volumeMode: Block  3
  claimRef:  4
    name: websrv1-staticimgs
    namespace: vm-project
  iscsi:  5
    targetPortal: 192.168.51.40:3260
    iqn: iqn.1986-03.com.ibm:2145.disk1
    lun: 0
    initiatorName: iqn.1994-05.com.redhat:openshift-nodes

1

The volume size should match the size of the device that your storage administrators prepared.

2

The available access modes depend on the storage type. iSCSI supports only the ReadWriteOnce and ReadOnlyMany access modes. Because VM live migration requires the ReadWriteMany access mode, live migration is not available with iSCSI PVs.

3

For VM disks, Red Hat recommends using the Block volume mode. For iSCSI PVs, you can choose between Block and Filesystem modes. Some other PV types, such as NFS, support only the Filesystem mode.

If you set the mode to Filesystem, then Kubernetes automatically formats the device. If you use that volume for a VM disk, then Red Hat OpenShift Virtualization creates a disk image file in the file system.

4

By default, Kubernetes does not reserve PVs. If a developer creates a PVC before you, then Kubernetes might bind it to your PV. For Kubernetes to reserve a PV for a specific PVC, provide the details of that PVC in the claimRef section.

5

The iscsi section provides the details of the remote iSCSI target. Your storage administrators must provide you with that configuration.

The following example shows the resource file for a PV that describes the connection to an NFS share:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany  1
  volumeMode: Filesystem  2
  claimRef:
    name: websrv1-logs
    namespace: vm-project
  nfs:  3
    path: /exports-ocp/vm135
    server: 10.20.42.42

1

NFS PVs support the ReadWriteMany access mode. That mode enables VM live migration.

2

For NFS PVs, Kubernetes supports only the Filesystem mode. Because the default value for the volumeMode parameter is Filesystem, you could omit the parameter. If you use the PV for a VM disk, then OpenShift Virtualization creates a disk image file in the NFS share.

3

The nfs section provides the NFS share connection details. Your storage administrators must provide you with that configuration.

The following example shows the resource file for a PV that describes the connection to a Fibre Channel volume. Using Fibre Channel implies that the cluster nodes have host bus adapters (HBAs) that are connected to a Fibre Channel storage array.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  claimRef:
    name: dbsrv1-binlog
    namespace: vm-project
  fc:
    targetWWNs:
      - "50060e801049cfd1"
    lun: 0

PVs are global objects and are not tied to a project. Only cluster administrators can create PVs.

To create a PV from a resource file, run the oc apply -f resourcefile.yaml command as a cluster administrator.
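For example, assuming that you saved the preceding iSCSI example as iscsi-pv.yaml:

[user@host ~]$ oc apply -f iscsi-pv.yaml
persistentvolume/iscsi-pv created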

You can also use the OpenShift web console to create PVs. Log in to the web console as an administrator, navigate to Storage → PersistentVolumes, click Create PersistentVolume, and then paste the resource file into the YAML editor. Click Create.

Create a Persistent Volume Claim

If you created a PV that contains a claimRef section, then you must use the corresponding name and namespace for your PVC.

The following example shows the resource file for a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: websrv1-staticimgs
  namespace: vm-project
spec:
  resources:
    requests:
      storage: 50Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block

You do not need cluster administrator privileges to create PVCs.

To create a PVC from a resource file, run the oc apply -f resourcefile.yaml command.
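For example, assuming that you saved the preceding resource file as websrv1-staticimgs-pvc.yaml, the following commands create the PVC and verify that Kubernetes bound it to the PV. The oc get output is illustrative:

[user@host ~]$ oc apply -f websrv1-staticimgs-pvc.yaml
persistentvolumeclaim/websrv1-staticimgs created
[user@host ~]$ oc get pvc websrv1-staticimgs -n vm-project
NAME                 STATUS   VOLUME     CAPACITY   ACCESS MODES   ...
websrv1-staticimgs   Bound    iscsi-pv   50Gi       RWO            ...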

You can also use the OpenShift web console to create PVCs. Navigate to Storage → PersistentVolumeClaims, click Create PersistentVolumeClaim → With Form, and then complete the form. Ensure that you use the correct name, access mode, size, and volume mode for the PVC so that Kubernetes binds the PV that you prepared. Keep the default value for the StorageClass parameter. Kubernetes does not use that parameter when you explicitly configure the bind. Click Create.

Note

Kubernetes does not delete the static PV when you delete the PVC. If you no longer need the PV, then ask your cluster administrator to delete it. To create a PVC that reuses the PV, your cluster administrator must release the PV first. To release the PV, your cluster administrator can edit the resource and remove the uid parameter from the claimRef section:

...output omitted...
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: websrv1-staticimgs
    namespace: vm-project
    resourceVersion: "325655"
    # uid: d9c1a805-f1e8-4370-8d6e-c450dc9c3ef3
...output omitted...
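Alternatively, your cluster administrator can remove the parameter non-interactively with a JSON patch. The following command is one possible way, using the iscsi-pv example from earlier in this section:

[user@host ~]$ oc patch pv iscsi-pv --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'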

Attach a Persistent Volume to a Virtual Machine

When you attach a PVC as a VM disk, OpenShift Virtualization redirects all the disk read/write operations to the underlying volume on the node. The node then forwards the operations to the back-end storage.

Inject a Disk Image into the Volume

After you create the PVC, the associated volume is empty. If you attach the volume as a VM disk, then you get an empty disk that you can partition, format, and mount.

However, if you have a disk image in IMG, ISO, or QCOW2 format, then you can prepopulate the volume with that image.

Use the virtctl command to inject a disk image into the PVC:

[user@host ~]$ virtctl image-upload pvc websrv1-staticimgs \
  --image-path=./webimgfs.qcow2 --no-create
Using existing PVC vm-project/websrv1-staticimgs
Waiting for PVC websrv1-staticimgs upload pod to be ready...
Pod now ready
Uploading data to https://cdi-uploadproxy-openshift-cnv.apps.ocp4.example.com

 249.88 MiB / 249.88 MiB [=================================================================] 100.00% 1s

Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading ./webimgfs.qcow2 completed successfully

With the OpenShift web console, you inject the disk image at the same time that you create the PVC. Navigate to Storage → PersistentVolumeClaims, click Create PersistentVolumeClaim → With Data upload form, and then complete the form.

Figure 5.7: Create a PVC and upload a disk image

Attach the External Volume as a Disk

Use the OpenShift web console to attach the PVC to a VM. Navigate to Virtualization → VirtualMachines, select the VM, and then navigate to the Configuration → Disks tab. Click Add disk and complete the form:

  • Enter a name for the disk in the Name field.

  • Select Use an existing PVC in the Source field.

  • Select your PVC in the PVC name field.

Click Save to attach the disk.

Remember that you must stop the VM if you want the new disk to use the virtio interface.
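Behind the scenes, the web console updates the VirtualMachine resource. The following sketch shows only the fields that are relevant to the new disk, assuming a VM named websrv1 and the websrv1-staticimgs PVC from the earlier examples:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: websrv1
  namespace: vm-project
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: staticimgs
              disk:
                bus: virtio
      volumes:
        - name: staticimgs
          persistentVolumeClaim:
            claimName: websrv1-staticimgs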

References

Knowledgebase: "How to Manually Reclaim and Reuse OpenShift Persistent Volumes That Are Released"

For more information about the access modes that are available for each volume plug-in, refer to the Access Modes section in the Red Hat OpenShift Container Platform Storage guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#pv-access-modes_understanding-persistent-storage

For more information about injecting a disk image into a volume, refer to the Uploading Local Disk Images by Using the Web Console section in the Red Hat OpenShift Container Platform Virtualization guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/virtualization/index#virt-uploading-local-disk-images-web

Kubernetes List of Provisioners

Kubernetes Persistent Volume Reference
