Chapter 3.  Configure Kubernetes Networking for Virtual Machines

Abstract

Goal

Configure standard Kubernetes network objects and external access for VMs and virtual machine-backed applications.

Objectives
  • Describe VM communication on the Kubernetes SDN, create a ClusterIP service for a VM, and create a network policy for virtual machines.

  • Configure external access for virtual machines.

Sections
  • Kubernetes Networking Objects (and Guided Exercise)

  • Configure External Access to Virtual Machines (and Guided Exercise)

Lab
  • Configure Kubernetes Networking for Virtual Machines

Kubernetes Networking Objects

Objectives

  • Describe VM communication on the Kubernetes SDN, create a ClusterIP service for a VM, and create a network policy for virtual machines.

Review the Kubernetes SDN

Kubernetes automatically assigns an IP address to every pod. Pods can communicate with each other even if they run on different cluster nodes or belong to different Kubernetes namespaces.

Kubernetes implements this infrastructure with the use of a Software-Defined Network (SDN), which enables Kubernetes to control the network traffic and the network resources programmatically.

Introduction to the Cluster Network Operator

Red Hat OpenShift Container Platform (RHOCP) uses the Cluster Network Operator (CNO) for managing the SDN. The operator calls a plug-in that adheres to the Container Network Interface (CNI) specification to configure container network interfaces.

RHOCP includes several plug-in providers, such as OpenShift SDN, OVN-Kubernetes, and Kuryr. The OVN-Kubernetes provider, which runs the Open vSwitch (OVS) plug-in on each node, is selected by default during the installation process.
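
You can confirm which provider a cluster uses by querying the networkType field of the cluster Network resource. The following command is a quick check; the full resource is shown later in this section, and the output assumes the default OVN-Kubernetes configuration:

[user@host ~]$ oc get network cluster -o jsonpath='{.status.networkType}'
OVNKubernetes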

Expose Pods and Virtual Machines by Using Services

The following diagram shows how the Kubernetes pod SDN connects all the pods to a shared network.

Figure 3.1: Kubernetes pod SDN

Pods are constantly created and deleted across the nodes in the cluster. For example, when you deploy a new version of your application, Kubernetes destroys the old pods and deploys new ones. When a node goes into maintenance, Kubernetes destroys its pods, and starts new ones on the remaining nodes. Because Kubernetes assigns a different IP address each time that it creates a pod, pods are not easily addressable.

You can use Kubernetes services to provide a single and unique IP address for other pods to use, independently of where the pods are running.

A service includes a selector, which is a set of labels that indicates which pods receive the traffic through the service. Kubernetes adds each pod that matches the selector to the service resource as an endpoint. When pods are created and destroyed, the service automatically updates its endpoints, and load balances client requests across the member pods.

Kubernetes uses one subnet for pods and one subnet for services, and when you address the service IP, Kubernetes forwards the traffic transparently to the pods.

The following diagram shows three pods that run the API of an application. The pods are not all running on the same node. The service1 service balances the load between the pods, and the service2 service forwards the requests to a VM.

Figure 3.2: Pod and service subnets

You can configure the address range of each network during the installation. Run the oc get network/cluster -o yaml command to list the ranges that your cluster is using.

[user@host ~]$ oc get network/cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
...output omitted...
spec:
  clusterNetwork:
  - cidr: 10.8.0.0/14
    hostPrefix: 23
  externalIP:
    policy: {}
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
...output omitted...

Kubernetes provides several types of services. The ClusterIP type, which is the default, assigns an internal IP address to the service that is accessible only from inside the cluster. To expose the VM outside the cluster, Red Hat OpenShift provides a different mechanism, which is described elsewhere in this course.

Address a Service by its DNS Record

The internal DNS service resolves name queries for applications that are deployed in Kubernetes, so that these applications can find the IP address of a service.

The DNS Operator deploys and runs a DNS server, which monitors the services to automatically create and update the DNS records. The DNS Operator manages the svc.cluster.local domain name for services, and creates records in the servicename.namespace.svc.cluster.local format. The operator automatically creates the /etc/resolv.conf configuration file inside the pods so that name resolution is immediately available without further configuration.
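
For example, the DNS Operator deploys a configuration similar to the following inside a pod in the prod2 namespace. The nameserver value is representative; it is the IP address of the cluster DNS service on the service network:

search prod2.svc.cluster.local svc.cluster.local cluster.local
nameserver 172.30.0.10
options ndots:5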

The following example retrieves the IP address of the back-end service in the prod2 namespace by querying the backend.prod2.svc.cluster.local record.

[user@host ~]$ getent hosts backend.prod2.svc.cluster.local
172.30.123.204  backend.prod2.svc.cluster.local

Create Services for Virtual Machines

VMs in Kubernetes run inside virt-launcher pods. These pods get an IP address on the pod SDN, which you can use in a Kubernetes service to reach the VM through a fixed IP address. This address is different from the IP address inside the VM.

The virt-launcher process that runs inside the pod runs a Dynamic Host Configuration Protocol (DHCP) server, which provides an IP address and the DNS configuration to the VM. The virt-launcher pod redirects the inbound traffic to the VM and routes the outbound traffic to its destination.
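
You can verify this setup by comparing the IP address that the VirtualMachineInstance resource reports with the IP address of its virt-launcher pod; both report the same address on the pod network. The following commands are a sketch, and the pod name is an example that differs on your cluster:

[user@host ~]$ oc get vmi backendvm
[user@host ~]$ oc get pod virt-launcher-backendvm-q4n2k -o wide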

You can create a Kubernetes service to expose a VM inside the cluster. After you expose the VM, pods and other VMs inside the cluster can access the application that the VM hosts.

Prepare a Label for a Service

To expose a VM, you must add a label to the VirtualMachine resource, and then create a service whose selector matches that label.

To add a label to a VM from the web console, navigate to Virtualization → VirtualMachines, select the VM, and then navigate to the YAML tab.

Figure 3.3: YAML manifest of a VM

The following VirtualMachine resource definition shows the tier: backend label that a Kubernetes service uses to select the virt-launcher pod.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
...output omitted...
  labels:
    app: backendvm
...output omitted...
  name: backendvm
spec:
  dataVolumeTemplates:
...output omitted...
  template:
    metadata:
      creationTimestamp: null
      labels:
        tier: backend
...output omitted...

Note

Add the label to the labels section in the .spec.template.metadata.labels path. This change ensures that the label is set to the virt-launcher pod.

The resource definition includes several labels sections for different parts of the VM configuration. Kubernetes does not use these other labels to identify pods for services.

To add a label from the command line, use the oc edit vm vmname command to edit the virtual machine, and save the changes.

[user@host ~]$ oc edit vm backendvm
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
...output omitted...
  labels:
    app: backendvm
...output omitted...
  name: backendvm
spec:
  dataVolumeTemplates:
...output omitted...
  template:
    metadata:
      creationTimestamp: null
      labels:
        tier: backend
...output omitted...
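
Alternatively, you can apply the same change non-interactively with a single oc patch command. The following command is a sketch that assumes the backendvm name from the previous example:

[user@host ~]$ oc patch vm backendvm --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"tier":"backend"}}}}}'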

When you edit a VirtualMachine resource manifest, Kubernetes does not automatically propagate your changes to the VirtualMachineInstance and virt-launcher pod resources. You can restart the VM to re-create the resources and apply the latest changes. Kubernetes re-creates the virtual machine instance (VMI) and the virt-launcher pod resources with the latest details from the VirtualMachine resource.

To restart a VM from the OpenShift web console, navigate to Virtualization → VirtualMachines, select the VM, and then click Actions → Restart.

Figure 3.4: Restart a VM

You can use the virtctl client to restart a VM from the command line. The following example uses the virtctl restart vmname command to restart the backup VM.

[user@host ~]$ virtctl restart backup
VM backup was scheduled to restart

Use the oc get vm command to confirm that the VM is running:

[user@host ~]$ oc get vm
NAME     AGE   PHASE     IP           NODENAME   READY
backup   23s   Running   10.11.0.22   worker02   True

You can also manually set the tier: backend label on the virt-launcher pod. From the command line, use the oc label pod command:

[user@host ~]$ oc get pods
NAME                            READY   STATUS    RESTARTS   AGE
virt-launcher-backendvm-q4n2k   1/1     Running   0          41m
[user@host ~]$ oc label pod virt-launcher-backendvm-q4n2k tier=backend
pod/virt-launcher-backendvm-q4n2k labeled

To manually set the tier: backend label from the web console, navigate to Workloads → Pods, select the virt-launcher pod, and then navigate to the Details tab.

Figure 3.5: Edit the labels of the virt-launcher pod

In the Labels section, click Edit and then add the label.

Figure 3.6: Add a label to the virt-launcher pod

Even though you set the label on the virt-launcher pod, you must still define the same label at the VirtualMachine resource level.

When you restart a VM, Kubernetes destroys the VMI and the virt-launcher pod resources and then re-creates them from the VirtualMachine resource. If you do not also add the label to the VirtualMachine resource, then the label that you set on the virt-launcher pod is lost.

Configure a Service for a Virtual Machine

To create a service from the web console, navigate to Networking → Services, click Create Service, and then use the YAML editor to declare the service.

Figure 3.7: Create a service
Figure 3.8: YAML manifest of a service

After you create the service, the web console displays the IP address.

You can use the following manifest to create a service from the command line:

apiVersion: v1
kind: Service
metadata:
  name: backend  1
  namespace: prod2  2
spec:
  type: ClusterIP
  selector:
    tier: backend  3
  ports:
    - protocol: TCP  4
      port: 80
      targetPort: 8080

1

The name of the service that you create. The DNS Operator creates a record for that service name.

2

The namespace that hosts the VM.

3

The label that matches the label that you define in the VirtualMachine resource.

4

The service listens on port 80/TCP and forwards the requests to the back-end VM on port 8080.

Use the oc create -f service_file.yaml command to create a service from a YAML manifest:

[user@host ~]$ oc create -f service_file.yaml
service/backend created
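
As an alternative to writing the manifest yourself, the virtctl client provides the expose command to create a service for a VM. The following command is a sketch that creates an equivalent ClusterIP service for the backendvm VM:

[user@host ~]$ virtctl expose vm backendvm --name backend --port 80 --target-port 8080 --type ClusterIP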

Confirm the creation of the service and notice the IP address that is assigned:

[user@host ~]$ oc get svc
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
backend   ClusterIP   172.30.123.204   <none>        80/TCP     2m57s

For each pod that matches the selector, Kubernetes automatically creates an endpoint resource. To confirm that the service points to the VM, you can compare the IP address in the endpoint resource with the IP address in the VirtualMachineInstance resource:

[user@host ~]$ oc get endpoints
NAME      ENDPOINTS           AGE
backend   10.128.3.135:8080   3m13s
[user@host ~]$ oc get vmi
NAME        AGE   PHASE     IP              NODENAME   READY
backendvm   13m   Running   10.128.3.135    prodnode1  True

After you create the service, the DNS Operator adds the backend.prod2.svc.cluster.local record with the 172.30.123.204 address. You can then access the application that is running on the VM from another pod or another VM by using the backend.prod2.svc.cluster.local DNS name. Because the /etc/resolv.conf file that the DNS Operator deploys on the pods defines the svc.cluster.local and prod2.svc.cluster.local search domains, you can also use the backend.prod2 or backend short names to access the application.

The following example starts a temporary test pod and performs DNS queries for the service name:

[user@host ~]$ oc run mytestnet -it --rm --image=rhel8/toolbox
If you don't see a command prompt, try pressing enter.
[user@mytestnet /]# getent hosts backend.prod2.svc.cluster.local
172.30.123.204  backend.prod2.svc.cluster.local
[user@mytestnet /]# getent hosts backend.prod2
172.30.123.204  backend.prod2.svc.cluster.local
[user@mytestnet /]# getent hosts backend
172.30.123.204  backend.prod2.svc.cluster.local
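
From the same test pod, you can also confirm that the service forwards traffic to the VM. This check assumes that the application inside the VM is an HTTP server that listens on the target port:

[user@mytestnet /]# curl http://backend.prod2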

Configure Network Policies for Services

By default, all pods in a namespace are accessible from pods in any other namespace. For example, a pod or a VM that runs in one namespace can connect to a pod or a VM that runs in another namespace. However, to restrict connections, such as allowing traffic only between pods that run in the same namespace, you must establish stricter rules.

With Kubernetes network policies, you can configure isolation policies for individual pods. You can use labels to apply network policies to pods and namespaces.

From the OpenShift web console, navigate to Administration → Namespaces, select the namespace, and then navigate to the Details tab.

Figure 3.9: Edit the labels of a namespace

In the Labels section, click Edit and then add the label.

Figure 3.10: Add a label to a namespace

To add a label to a namespace from the command line, use the oc label namespace command. The following command adds the name=client-ns label to the prod-front namespace:

[user@host ~]$ oc label namespace prod-front name=client-ns
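
To confirm that the label is set, display the labels of the namespace:

[user@host ~]$ oc get namespace prod-front --show-labels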

The following example defines a network policy that applies to all pods with the tier=backend label in the current namespace. The policy enables ingress traffic to port 8080 from pods and VMs with the tier=front label that run in namespaces with the name=client-ns label, and from all pods and VMs in namespaces with the name=server-ns label.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: network-1-policy
spec:
  podSelector:  1
    matchLabels:
      tier: backend

  ingress:  2
  - from:  3
    - namespaceSelector:  4
        matchLabels:
          name: client-ns
      podSelector:
        matchLabels:
          tier: front
    - namespaceSelector:  5
        matchLabels:
          name: server-ns
    ports:  6
    - port: 8080
      protocol: TCP

1

The top-level podSelector field is required and defines which pods and VMs in the current namespace the network policy applies to. If the podSelector field is empty, then the policy applies to all the pods and VMs in the current namespace.

2

The ingress field lists the ingress traffic rules.

3

The from field lists the allowed sources. The incoming traffic is allowed if any of the rules match (logical OR).

4

The first rule allows traffic from pods and VMs with the tier=front label that run in namespaces with the name=client-ns label. Because the namespaceSelector and podSelector fields are part of the same rule, both conditions must match (logical AND).

5

The second rule allows incoming traffic from all pods and VMs in the namespaces with the name=server-ns label.

6

The ports field lists the destination ports that allow traffic to reach the selected pods.
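
Because an empty podSelector field selects every pod and VM in the namespace, a common complementary pattern is a policy that denies all ingress traffic by default, so that only the connections that other policies explicitly allow can reach the workloads. The following manifest is a minimal sketch of such a deny-all policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector: {}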

To create a network policy from the web console, navigate to Networking → NetworkPolicies, click Create Network Policy, and complete the form.

Figure 3.11: Create a network policy
Figure 3.12: Form view of a network policy

You can also click YAML view to use the YAML editor to create the resource.

Figure 3.13: Manifest file of a network policy

References

For more information, refer to the Cluster Network Operator in OpenShift Container Platform, DNS Operator in OpenShift Container Platform, and Network Policy sections in the Red Hat OpenShift Container Platform Networking guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index

For more information about VM networking, refer to the Virtual Machine Networking section in the Red Hat OpenShift Container Platform Virtualization guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/virtualization/index#virtual-machine-networking

For more information about service resources, refer to the Service section in the Kubernetes documentation at https://kubernetes.io/docs/concepts/services-networking/service/

Revision: do316-4.14-d8a6b80