When you create a PVC to request a volume for your application, Kubernetes prepares a PV and then binds it to the PVC.
With dynamic provisioning, Kubernetes creates persistent volumes on demand. It uses a provisioner to control the back-end storage.
Not all storage types have a provisioner. In some organizations, dynamic provisioning is not allowed because the back-end storage is managed manually or is used by applications that Kubernetes does not control.
In those environments, Kubernetes uses static provisioning. Your storage administrators prepare several volumes with the back-end storage, and then your Kubernetes administrators declare these volumes in PV resources. When you create the PVC, Kubernetes binds one available PV to the claim.
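For example, you can list the PV resources to see which volumes are still available and which ones are already bound to a claim. The following output is only an illustrative sketch; the volume and claim names are assumptions:

[user@host ~]$ oc get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                     ...
static1   50Gi       RWO            Retain           Available                             ...
static2   20Gi       RWX            Retain           Bound       vm-project/websrv1-logs   ...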
You can use one of these options to connect VMs to external storage:
Connect the storage from inside the VM.
Connect the storage through a Kubernetes PV and then attach the volume as a VM disk.
The first option is to connect the storage directly from inside the VM by using the VM operating system tools. You can choose this solution for storage that you access through the network, such as Server Message Block (SMB), Network File System (NFS), or Internet Small Computer Systems Interface (iSCSI).
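For example, for an NFS share, you can mount the export directly from a Linux guest. The following commands are a minimal sketch; the NFS server name, export path, and mount point are assumptions:

[root@vm ~]# dnf install -y nfs-utils
[root@vm ~]# mkdir /mnt/data
[root@vm ~]# mount -t nfs nfs.example.com:/exports/data /mnt/data

Add a matching entry to the /etc/fstab file in the guest so that the share mounts at boot.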
If you access the back-end storage through a dedicated network, then your cluster administrators can use the Kubernetes NMState Operator to configure that network on the cluster nodes. You can then use the Multus plug-in to add a network interface on your VM that is connected to the storage network. The Kubernetes NMState Operator and Multus are explained in more detail elsewhere in this course.
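The following NodeNetworkConfigurationPolicy is only an illustrative sketch of what a cluster administrator might apply with the Kubernetes NMState Operator; the policy name, interface name, and addressing are assumptions:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-network        # assumed policy name
spec:
  desiredState:
    interfaces:
    - name: ens4               # assumed NIC that is cabled to the storage network
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true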
Another option is to prepare a static PV for the external storage, create a PVC that binds to the PV, and then attach that PVC as a disk inside the VM. As in the first option, your cluster administrators can use the Kubernetes NMState Operator to configure the cluster nodes for network storage that you access through a dedicated network.
A PV represents a piece of storage and provides all the details that the cluster nodes need to connect to that storage.
The following example shows the resource file for a PV that describes the connection to an iSCSI target:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  claimRef:
    name: websrv1-staticimgs
    namespace: vm-project
  iscsi:
    targetPortal: 192.168.51.40:3260
    iqn: iqn.1986-03.com.ibm:2145.disk1
    lun: 0
    initiatorName: iqn.1994-05.com.redhat:openshift-nodes
The volume size should match the size of the device that your storage administrators prepared.
The available access modes depend on the storage type.
For iSCSI, only the ReadWriteOnce and ReadOnlyMany access modes are available.
For VM disks, Red Hat recommends using the Block volume mode. If you set the mode to Filesystem, then OpenShift Virtualization stores the VM disk as an image file on the mounted file system.
By default, Kubernetes does not reserve PVs.
If a developer creates a PVC before you, then Kubernetes might bind it to your PV.
For Kubernetes to reserve a PV for a specific PVC, provide the name and namespace of that PVC in the claimRef section of the PV.
The claimRef section in the preceding example reserves the PV for the websrv1-staticimgs PVC in the vm-project namespace.
The following example shows the resource file for a PV that describes the connection to an NFS share:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  claimRef:
    name: websrv1-logs
    namespace: vm-project
  nfs:
    path: /exports-ocp/vm135
    server: 10.20.42.42
NFS PVs support the ReadWriteMany access mode, so multiple cluster nodes can mount the volume at the same time.
For NFS PVs, Kubernetes supports only the Filesystem volume mode.
The nfs section provides the path of the export and the IP address or host name of the NFS server.
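Because the claimRef section reserves this volume, the matching PVC must use the websrv1-logs name in the vm-project namespace, and its size, access mode, and volume mode must be compatible with the PV. The following resource file is a minimal sketch of such a claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: websrv1-logs
  namespace: vm-project
spec:
  resources:
    requests:
      storage: 20Gi
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem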
The following example shows the resource file for a PV that describes the connection to a Fibre Channel volume. Using Fibre Channel implies that the cluster nodes have host bus adapters (HBAs) that are connected to a Fibre Channel storage array.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  claimRef:
    name: dbsrv1-binlog
    namespace: vm-project
  fc:
    targetWWNs:
    - "50060e801049cfd1"
    lun: 0
PVs are global objects and are not tied to a project. Only cluster administrators can create PVs.
To create a PV from a resource file, run the oc apply -f resourcefile.yaml command as a cluster administrator.
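For example, assuming that the iSCSI PV from this section is saved in a file named iscsi-pv.yaml (the file name is an assumption):

[user@host ~]$ oc apply -f iscsi-pv.yaml
persistentvolume/iscsi-pv created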
You can also use the OpenShift web console to create PVs. Log in to the web console as an administrator, navigate to Storage → PersistentVolumes, click Create PersistentVolume, and then write the resource file in the YAML editor. Click Create.
If you created a PV that contains a claimRef section, then you must use the corresponding name and namespace for your PVC.
The following example shows the resource file for a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: websrv1-staticimgs
  namespace: vm-project
spec:
  resources:
    requests:
      storage: 50Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
You do not need cluster administrator privileges to create PVCs.
To create a PVC from a resource file, run the oc apply -f resourcefile.yaml command.
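For example, assuming that the preceding PVC is saved in a file named pvc.yaml (the file name is an assumption), the claim binds to the iscsi-pv volume that reserves it:

[user@host ~]$ oc apply -f pvc.yaml
persistentvolumeclaim/websrv1-staticimgs created
[user@host ~]$ oc get pvc websrv1-staticimgs -n vm-project
NAME                 STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
websrv1-staticimgs   Bound    iscsi-pv   50Gi       RWO                           10s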
You can also use the OpenShift web console to create PVCs. Navigate to Storage → PersistentVolumeClaims, click Create PersistentVolumeClaim → With Form, and then complete the form. Ensure that you use the correct name, access mode, size, and volume mode for the PVC so that Kubernetes binds the PV that you prepared. Keep the default value for the StorageClass parameter. Kubernetes does not use that parameter when you explicitly configure the bind. Click Create.
Kubernetes does not delete the static PV when you delete the PVC.
If you no longer need the PV, then ask your cluster administrator to delete it.
To create a PVC that reuses the PV, your cluster administrator must release the PV first.
To release the PV, your cluster administrator can edit the resource and remove the uid parameter from the claimRef section:
...output omitted...
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: websrv1-staticimgs
    namespace: vm-project
    resourceVersion: "325655"
    # uid: d9c1a805-f1e8-4370-8d6e-c450dc9c3ef3
...output omitted...
When you attach a PVC as a VM disk, OpenShift Virtualization redirects all the disk read/write operations to the underlying volume on the node. The node then forwards the operations to the back-end storage.
After you create the PVC, the associated volume is empty. If you attach the volume as a VM disk, then you get an empty disk that you can partition, format, and mount.
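For example, assuming that the empty disk appears as /dev/vdb inside a Linux guest (the device name depends on the disk interface and on the number of disks), you could format and mount it:

[root@vm ~]# mkfs.xfs /dev/vdb
[root@vm ~]# mkdir /srv/staticimgs
[root@vm ~]# mount /dev/vdb /srv/staticimgs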
If you already have a disk image in IMG, ISO, or QCOW2 format, then you can instead prepopulate the volume with that image.
Use the virtctl command to inject a disk image into the PVC:
[user@host ~]$ virtctl image-upload pvc websrv1-staticimgs \
--image-path=./webimgfs.qcow2 --no-create
Using existing PVC vm-project/websrv1-staticimgs
Waiting for PVC websrv1-staticimgs upload pod to be ready...
Pod now ready
Uploading data to https://cdi-uploadproxy-openshift-cnv.apps.ocp4.example.com

249.88 MiB / 249.88 MiB [=================================================================] 100.00% 1s

Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading ./webimgfs.qcow2 completed successfully
With the OpenShift web console, you inject the disk image at the same time that you create the PVC. Navigate to Storage → PersistentVolumeClaims, click Create PersistentVolumeClaim → With Data upload form, and then complete the form.

Use the OpenShift web console to attach the PVC to a VM. Navigate to Virtualization → VirtualMachines, select the VM, and then navigate to the Configuration → Disks tab. Click Add disk and complete the form:
Enter a name for the disk in the Name field.
Select Use an existing PVC in the Source field.
Select your PVC in the PVC name field.
Click Save to attach the disk.
Remember that you must stop the VM to use the virtio interface.
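For example, assuming a VM named websrv1, you can use the virtctl command to stop the VM before you attach the disk with the virtio interface, and then start it again:

[user@host ~]$ virtctl stop websrv1
VM websrv1 was scheduled to stop
[user@host ~]$ virtctl start websrv1
VM websrv1 was scheduled to start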
Knowledgebase: "How to Manually Reclaim and Reuse OpenShift Persistent Volumes That Are Released"
For more information about the access modes that are available for each volume plug-in, refer to the Access Modes section in the Red Hat OpenShift Container Platform Storage guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#pv-access-modes_understanding-persistent-storage
For more information about injecting a disk image into a volume, refer to the Uploading Local Disk Images by Using the Web Console section in the Red Hat OpenShift Container Platform Virtualization guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/virtualization/index#virt-uploading-local-disk-images-web