Abstract

| Goal | Configure standard Kubernetes network objects and external access for VMs and virtual machine-backed applications. |
| Objectives | Describe VM communication on the Kubernetes SDN, create a ClusterIP service for a VM, and create a network policy for virtual machines. |
Kubernetes automatically assigns an IP address to every pod. Pods can communicate with each other even if they run on different cluster nodes or belong to different Kubernetes namespaces.
Kubernetes implements this infrastructure with the use of a Software-Defined Network (SDN), which enables Kubernetes to control the network traffic and the network resources programmatically.
Red Hat OpenShift Container Platform (RHOCP) uses the Cluster Network Operator (CNO) for managing the SDN. The operator calls a plug-in that adheres to the Container Network Interface (CNI) specification to configure container network interfaces.
RHOCP includes some plug-in providers such as OpenShift SDN, OVN-Kubernetes, and Kuryr. The OVN-Kubernetes provider, which runs the Open vSwitch (OVS) plug-in on each node, is selected by default during the installation process.
The following diagram shows how the Kubernetes pod SDN connects all the pods to a shared network.
Pods are constantly created and deleted across the nodes in the cluster. For example, when you deploy a new version of your application, Kubernetes destroys the old pods and deploys new ones. When a node goes into maintenance, Kubernetes destroys its pods, and starts new ones on the remaining nodes. Because Kubernetes assigns a different IP address each time that it creates a pod, pods are not easily addressable.
You can use Kubernetes services to provide a single and unique IP address for other pods to use, independently of where the pods are running.
Services use label selectors to indicate which pods receive the traffic through the service. Kubernetes adds each pod that matches these selectors to the service resource as an endpoint. When pods are created and destroyed, the service automatically updates its endpoints, and load balances client requests across the member pods.
Kubernetes uses one subnet for pods and one subnet for services, and when you address the service IP, Kubernetes forwards the traffic transparently to the pods.
The following diagram shows three pods that run the API of an application.
The pods are not all running on the same node.
The service1 service balances the load between the pods, and the service2 service forwards the requests to a VM.
You can configure the address range of each network during the installation.
Run the oc get network/cluster -o yaml command to list the ranges that your cluster is using.
[user@host ~]$ oc get network/cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
...output omitted...
spec:
  clusterNetwork:
  - cidr: 10.8.0.0/14
    hostPrefix: 23
  externalIP:
    policy: {}
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
...output omitted...
Kubernetes provides several service types. The ClusterIP type assigns an internal IP address to the service, which is accessible only from inside the cluster. To expose a VM outside the cluster, Red Hat OpenShift provides a different mechanism, which is described elsewhere in this course.
The internal DNS service enables applications that are deployed in Kubernetes to find the IP address of a service by querying the service name.
The DNS Operator deploys and runs a DNS server, which monitors the services to automatically create and update the DNS records.
The DNS Operator manages the svc.cluster.local domain name for services, and creates records in the servicename.namespace.svc.cluster.local format.
The operator automatically creates the /etc/resolv.conf configuration file inside the pods so that name resolution is immediately available without further configuration.
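For example, the /etc/resolv.conf file inside a pod in the prod2 namespace typically looks like the following. The name server address and the search domains vary by cluster; 172.30.0.10 is only a common default for the cluster DNS service:

search prod2.svc.cluster.local svc.cluster.local cluster.local
nameserver 172.30.0.10
options ndots:5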
The following example retrieves the IP address of the back-end service in the prod2 namespace by querying the backend.prod2.svc.cluster.local record.
[user@host ~]# getent hosts backend.prod2.svc.cluster.local
172.30.123.204 backend.prod2.svc.cluster.local

VMs in Kubernetes run inside virt-launcher pods.
These pods get an IP address on the pod SDN, which is different from the IP address inside the VM. You can use this pod address to create a Kubernetes service that provides access to the VM through a fixed IP address.
The virt-launcher process that runs inside the pod runs a Dynamic Host Configuration Protocol (DHCP) server, which provides an IP address and the DNS configuration to the VM.
The virt-launcher pod redirects the inbound traffic to the VM and routes the outbound traffic to its destination.
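The following excerpt is a minimal sketch of the VirtualMachine template configuration that produces this behavior, assuming the default masquerade binding to the pod network; the default interface and network name is illustrative:

spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default        # connects the VM to the network that is defined below
            masquerade: {}       # virt-launcher forwards traffic between its pod IP and the VM
      networks:
      - name: default
        pod: {}                  # the default Kubernetes pod SDN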
You can create a Kubernetes service to expose a VM inside the cluster. After you expose the VM, pods and other VMs inside the cluster can access the application that the VM hosts.
To expose a VM, you add a label to the VirtualMachine resource, and then create a service whose selector matches that label.
To add a label to a VM from the web console, navigate to Virtualization → VirtualMachines, select the VM, and then navigate to the YAML tab.
The following VirtualMachine resource definition shows the tier: backend label that a Kubernetes service uses to select the virt-launcher pod.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  ...output omitted...
  labels:
    app: backendvm
  ...output omitted...
  name: backendvm
spec:
  dataVolumeTemplates:
  ...output omitted...
  template:
    metadata:
      creationTimestamp: null
      labels:
        tier: backend
      ...output omitted...

Add the label to the labels section in the .spec.template.metadata.labels path.
This change ensures that the label is set on the virt-launcher pod.
The resource definition includes several labels sections for different parts of the VM configuration.
Kubernetes does not use these other labels to identify pods for services.
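To review the labels that the template carries, you can query that path directly. This quick check assumes the backendvm VM from the preceding example:

[user@host ~]$ oc get vm backendvm -o jsonpath='{.spec.template.metadata.labels}'

The command prints the labels of the template, which must include tier: backend for the service selector to match.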
To add a label from the command line, use the oc edit vm vmname command to edit the virtual machine, and save the changes.
[user@host ~]$ oc edit vm backendvm
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  ...output omitted...
  labels:
    app: backendvm
  ...output omitted...
  name: backendvm
spec:
  dataVolumeTemplates:
  ...output omitted...
  template:
    metadata:
      creationTimestamp: null
      labels:
        tier: backend
      ...output omitted...
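You can apply the same change without opening an editor, for example with the oc patch command. The following sketch uses the names from this section:

[user@host ~]$ oc patch vm backendvm --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"tier":"backend"}}}}}'
virtualmachine.kubevirt.io/backendvm patched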
When you edit a VirtualMachine resource manifest, Kubernetes does not automatically propagate your changes to the VirtualMachineInstance and virt-launcher pod resources.
You can restart the VM to re-create the resources and apply the latest changes.
Kubernetes re-creates the virtual machine instance (VMI) and the virt-launcher pod resources with the latest details from the VirtualMachine resource.
To restart a VM from the OpenShift web console, navigate to Virtualization → VirtualMachines, select the VM, and then click Actions → Restart.
You can use the virtctl client to restart a VM from the command line.
The following example uses the virtctl restart command to restart the backup VM.
[user@host ~]$ virtctl restart backup
VM backup was scheduled to restart

Use the oc get vm command to confirm that the VM is running:
[user@host ~]$ oc get vm
NAME     AGE   PHASE     IP           NODENAME   READY
backup   23s   Running   10.11.0.22   worker02   True

You can also manually set the tier: backend label on the virt-launcher pod.
From the command line, use the oc label pod command:
[user@host ~]$ oc get pods
NAME                            READY   STATUS    RESTARTS   AGE
virt-launcher-backendvm-q4n2k   1/1     Running   0          41m
[user@host ~]$ oc label pod virt-launcher-backendvm-q4n2k tier=backend
pod/virt-launcher-backendvm-q4n2k labeled
To manually set the tier: backend label from the web console, navigate to Workloads → Pods, select the virt-launcher pod, and then navigate to the Details tab.
In the Labels section, click Edit and then add the label.
Even though you set the label on the virt-launcher pod, you must still define the same label at the VirtualMachine resource level. When you restart a VM, Kubernetes destroys the VMI and the virt-launcher pod resources, and then re-creates them from the VirtualMachine resource. If you do not also associate the label with the VirtualMachine resource, then the label that you set on the virt-launcher pod is lost.
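To confirm that the running pod carries the label, you can filter the pods by that label. This check uses the pod name from the earlier examples:

[user@host ~]$ oc get pods -l tier=backend
NAME                            READY   STATUS    RESTARTS   AGE
virt-launcher-backendvm-q4n2k   1/1     Running   0          41m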
To create a service from the web console, navigate to Networking → Services, click Create Service, and then use the YAML editor to declare the service.
After you create the service, the web console displays the IP address.
You can use the following manifest to create a service from the command line:

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: prod2
spec:
  type: ClusterIP
  selector:
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

- The name attribute sets the name of the service that you create. The DNS Operator creates a record for that service name.
- The namespace attribute sets the namespace that hosts the VM.
- The selector attribute defines the label that matches the label that you define in the VirtualMachine resource.
- The ports attribute configures the service to listen on port 80/TCP and to forward the requests to the back-end VM on port 8080.
From the command line, use the oc create -f service_file.yaml command to create the service from the YAML manifest:
[user@host ~]$ oc create -f service_file.yaml
service/backend created

Confirm the creation of the service and notice the IP address that is assigned:
[user@host ~]$ oc get svc
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
backend   ClusterIP   172.30.123.204   <none>        80/TCP    2m57s
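As an alternative to writing the manifest, the virtctl client can create the service from the labels in the VM template. The following sketch assumes the backendvm VM from this section, with the tier: backend label already set in its template:

[user@host ~]$ virtctl expose vm backendvm --name backend --port 80 --target-port 8080

The command creates a ClusterIP service that is equivalent to the previous manifest.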
For each pod that matches the selector, Kubernetes automatically adds an endpoint to the endpoints resource of the service.
To confirm that the service points to the VM, you can compare the IP address in the endpoint resource with the IP address in the VirtualMachineInstance resource:
[user@host ~]$ oc get endpoints
NAME      ENDPOINTS           AGE
backend   10.128.3.135:8080   3m13s
[user@host ~]$ oc get vmi
NAME        AGE   PHASE     IP             NODENAME    READY
backendvm   13m   Running   10.128.3.135   prodnode1   True
After you create the service, the DNS Operator adds the backend.prod2.svc.cluster.local record with the 172.30.123.204 address.
You can then access the application that is running on the VM from another pod or another VM by using the backend.prod2.svc.cluster.local DNS name.
Because the /etc/resolv.conf file that the DNS Operator deploys on the pods defines the svc.cluster.local and prod2.svc.cluster.local search domains, you can also use the backend.prod2 or backend short names to access the application.
The following example starts a temporary test pod and performs DNS queries for the service name:
[user@host ~]$ oc run mytestnet -it --rm --image=rhel8/toolbox
If you don't see a command prompt, try pressing enter.
[user@mytestnet /]# getent hosts backend.prod2.svc.cluster.local
172.30.123.204 backend.prod2.svc.cluster.local
[user@mytestnet /]# getent hosts backend.prod2
172.30.123.204 backend.prod2.svc.cluster.local
[user@mytestnet /]# getent hosts backend
172.30.123.204 backend.prod2.svc.cluster.local
By default, all pods in a namespace are accessible from pods in any other namespace. For example, a pod or a VM that runs in one namespace can connect to a pod or a VM that runs in another namespace. However, to restrict connections to pods that run in the same namespace, you must establish stricter rules.
With Kubernetes network policies, you can configure isolation policies for individual pods. You can use labels to apply network policies to pods and namespaces.
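Network policies are additive: a pod that no policy selects accepts all traffic, and a pod that at least one policy selects accepts only the traffic that a policy explicitly allows. For example, the following minimal policy, with an illustrative name, blocks all ingress traffic to every pod in its namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress              # no ingress rules are listed, so all ingress traffic is denied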
From the OpenShift web console, navigate to Administration → Namespaces, select the namespace, and then navigate to the Details tab.
In the Labels section, click Edit and then add the label.
To add a label to a namespace from the command line, use the oc label namespace command.
The following command adds the name=client-ns label to the prod-front namespace:
[user@host ~]$ oc label namespace prod-front name=client-ns

The following example defines a network policy that applies to all pods with the tier=backend label in the current namespace.
The policy enables ingress traffic to port 8080 from pods and VMs whose label is tier=front in the namespace with the name=client-ns label.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: network-1-policy
spec:
  podSelector:
    matchLabels:
      tier: backend
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: client-ns
      podSelector:
        matchLabels:
          tier: front
    - namespaceSelector:
        matchLabels:
          name: server-ns
    ports:
    - port: 8080
      protocol: TCP
- The top-level podSelector attribute selects the pods that the policy applies to: the pods with the tier=backend label in the current namespace.
- The ingress section lists the rules for the incoming traffic.
- The from section lists the sources that the rules accept traffic from.
- The first rule allows traffic from namespaces with the name=client-ns label, and only from the pods and VMs with the tier=front label in those namespaces.
- The second rule enables incoming traffic for all pods and VMs that are in the namespaces with the name=server-ns label.
- The ports section restricts the rules to port 8080/TCP.
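Assuming that you save the manifest in a file such as network-1-policy.yaml, you can create the policy and review it from the command line:

[user@host ~]$ oc create -f network-1-policy.yaml
networkpolicy.networking.k8s.io/network-1-policy created
[user@host ~]$ oc describe networkpolicy network-1-policy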
To create a network policy from the web console, navigate to Networking → NetworkPolicies, click Create NetworkPolicy, and complete the form.
You can also click Edit YAML to use the YAML editor to create the resource.
For more information, refer to the Cluster Network Operator in OpenShift Container Platform, DNS Operator in OpenShift Container Platform, and Network Policy sections in the Red Hat OpenShift Container Platform Networking guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index
For more information about VM networking, refer to the Virtual Machine Networking section in the Red Hat OpenShift Container Platform Virtualization guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/virtualization/index#virtual-machine-networking
For more information about service resources, refer to the Service section in the Kubernetes documentation at https://kubernetes.io/docs/concepts/services-networking/service/