Abstract

Goal: Configure node networking to connect virtual machines and nodes to networks that are outside the cluster.
The Kubernetes network model assigns an IP address to every pod in the cluster, so that links between pods and mapping of container ports to node ports are not necessary. The Kubernetes network model imposes these requirements on any networking implementation:
Pods in a cluster can communicate with all other pods without network address translation (NAT).
Agents on a node, such as system daemons and the kubelet, can communicate with all pods on that node.
Kubernetes assigns IP addresses at the pod level, which means that all containers within a pod share the same network namespace and can reach each other's ports on the localhost address.
Pods are unaware of the existence of node ports. However, you can request node ports that forward to a pod.
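To illustrate pod-level addressing, the following is a minimal sketch of a pod whose two containers share one network namespace, so the client container reaches the web server on localhost. The pod name, images, and port are illustrative assumptions, not values from this course.

apiVersion: v1
kind: Pod
metadata:
  name: shared-localhost-demo
spec:
  containers:
  - name: web
    # Assumed image that listens on port 8080 by default
    image: registry.access.redhat.com/ubi9/httpd-24
  - name: client
    image: registry.access.redhat.com/ubi9/ubi
    # Both containers share the pod IP address and network namespace,
    # so the web server is reachable on localhost from this container.
    command: ["/bin/sh", "-c", "sleep 10; curl -s http://localhost:8080/; sleep infinity"]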
See the references section for more information about the Kubernetes network model.
Container runtimes can implement the Kubernetes network model in many ways, including through custom resources (CR) and Container Network Interface (CNI) plug-ins. Similar to a wrapper, the Multus CNI plug-in calls other CNI plug-ins for advanced networking functions, such as attaching multiple network interfaces to pods in an OpenShift cluster.
RHOCP uses the Multus CNI plug-in to chain other CNI plug-ins. With Multus CNI, you can configure additional networks alongside the default pod network during and after your OpenShift cluster installation. Attaching multiple networks to a VM is called multihoming.
Although you can add another network to pods, all pods must contain an eth0 interface that is attached to the default pod network to maintain connectivity across the cluster.
You can view the interfaces that are attached to a pod by using the ip address command in that pod:
[user@host ~]$ oc exec -it pod_name -- ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
...output omitted...
2: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
...output omitted...
3: net1@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
...output omitted...
Additional network interfaces that use the Multus CNI plug-in follow the netN naming convention.
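For example, a pod requests an additional interface by listing a network attachment definition in the k8s.v1.cni.cncf.io/networks annotation. The following minimal sketch assumes that a network attachment definition named bridge-dev, which is created later in this section, exists in the pod's namespace; the pod name and image are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: multus-demo
  annotations:
    # Multus attaches one additional interface, which appears as net1 in the pod
    k8s.v1.cni.cncf.io/networks: bridge-dev
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]

The eth0 interface of this pod remains attached to the default pod network, and the additional interface is named net1.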
The following figure shows how additional network interfaces, which are attached by the Multus CNI plug-in, coexist within a pod:
Attaching additional networks can be helpful in situations where network isolation is needed. The following use cases demonstrate some reasons for network isolation.
You can increase the performance of a network-intensive workload by isolating data and control planes to different node interfaces.
You can isolate sensitive traffic onto a network plane that is managed specifically for security considerations, such as private data that must not be shared between tenants or customers.
You can connect pods and VMs to networks outside the cluster.
OpenShift Virtualization uses Custom Resource Definitions (CRDs) and CNI plug-ins, such as Multus CNI, to provide advanced networking functions for your VMs.
In RHOCP, the Cluster Network Addons Operator (CNAO) manages additional network configurations for VMs that are based on the Multus CNI plug-in.
The Multus CNI plug-in is configured through the NetworkAttachmentDefinition CR, whose specification the Network Plumbing Working Group maintains.
A network attachment definition is a namespaced object that exposes existing layer-2 network devices, such as bridges and switches, to VMs and pods. Red Hat recommends using the Cluster Network Operator (CNO) to centralize managing additional networks for pods in clusters where the OpenShift Virtualization operator is not deployed. For VMs in OpenShift Virtualization, you do not need to edit the CNO to create a network attachment definition.
To attach additional network interfaces to a pod or a VM, you must create a network attachment definition that defines the CNI plug-in configuration to use for the additional interface.
RHOCP provides the following CNI plug-ins for the Multus CNI plug-in to chain:
Configure an additional bridge-based network to enable pods on the same host to communicate with each other and with the host. A Linux bridge is required to attach VMs to multiple networks.
Configure an additional host-device network to enable pods to access a physical Ethernet network device on the host system.
Configure an additional IPVLAN-based network to enable pods on a host to have different IP addresses with the same MAC address as the host's physical device.
Configure an additional MACVLAN-based network to enable pods on a host to communicate with other hosts and their pods by using a physical network interface. Unlike an IPVLAN-based network, each pod that is attached to a MACVLAN additional network has a unique MAC address. A sample MACVLAN configuration follows this list.
Configure an additional SR-IOV network to enable pods to attach to a virtual function interface on SR-IOV-capable hardware on the host system.
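As an illustration, a minimal sketch of a network attachment definition that chains the MACVLAN plug-in might look like the following; the object name, the master interface ens4, and the subnet are illustrative placeholders, not values from this course.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-dev
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "macvlan-dev",
    "type": "macvlan",
    "master": "ens4",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.10.0/24"
    }
  }'

In this sketch, the master field names the physical node interface that backs the MACVLAN network, and the host-local IPAM plug-in assigns pod IP addresses from the given subnet.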
Each plug-in has its own limitations and requirements for the additional network, so the installation and configuration differ for each plug-in. However, every plug-in requires a network attachment definition that defines its configuration. Before you configure additional networks for VMs, a Linux bridge must be configured and attached to your VM-workload nodes by applying a configuration manifest to the cluster.
Node networking is explained in more detail later in this course.
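As a preview, the following is a minimal sketch of such a node configuration manifest, assuming the kubernetes-nmstate operator that OpenShift Virtualization uses for node network configuration; the policy name, the bridge name, and the port ens4 are illustrative placeholders.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: dev-bridge-policy
spec:
  nodeSelector:
    # Apply the policy to worker nodes that run VM workloads
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: dev-bridge
        description: Linux bridge for the additional VM network
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            # Placeholder: physical NIC on the nodes that carries the external network
            - name: ens4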
You can create network attachment definitions for VMs from the command line with a YAML manifest or from the OpenShift web console.
To configure a network attachment definition from the OpenShift web console, navigate to Networking → NetworkAttachmentDefinitions and then complete the form, or use the YAML editor to define the additional network. The following figure provides an example of a network attachment definition for a Linux bridge:
(Figure: Linux bridge network attachment definition form in the web console)
If VLAN IDs are configured on your additional network, then you can specify the ID number in the VLAN tag number field. Otherwise, you can leave the field empty.
You can also configure the bridge plug-in with a YAML manifest. The following YAML definition shows the configuration of a bridge plug-in:
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-devannotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/bridge-dev
spec: config: '{ "cniVersion": "0.3.1", "name": "bridge-dev",
"type": "cnv-bridge",
"bridge": "dev-bridge",
"macspoofchk": true,
"vlan": 0
}'
name: The name for the network attachment definition.
annotations: Optional: An annotation key-value pair that determines node selection for VM scheduling and execution. In this example, the annotation schedules VMs only on nodes where the dev-bridge Linux bridge is configured.
name (in config): The name for the configuration, which Red Hat recommends should match the name of the network attachment definition.
type: The name of the CNI plug-in to use. Change this field only if you use a different CNI plug-in.
bridge: The name of the Linux bridge that is configured on the node.
macspoofchk: Optional: When set to true, MAC spoof checking is enabled, which protects against MAC spoofing by permitting only traffic from the interface's assigned MAC address.
vlan: Optional: The VLAN tag of the network.
Apply the YAML manifest to your cluster with the oc create -f or oc apply -f commands.
The network attachment definition must be in the same namespace as the virtual machine.
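For reference, a virtual machine in that namespace consumes the network attachment definition through a Multus network entry in its template. The following fragment is a minimal sketch of the relevant part of a VirtualMachine resource; the interface name dev-net is a placeholder.

# Fragment of spec.template.spec in a VirtualMachine resource
domain:
  devices:
    interfaces:
      - name: default
        masquerade: {}          # default pod network
      - name: dev-net
        bridge: {}              # bridge binding for the additional network
networks:
  - name: default
    pod: {}
  - name: dev-net
    multus:
      networkName: bridge-dev   # name of the network attachment definition

The next command applies the network attachment definition manifest to the multus-test namespace.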
[user@host ~]$ oc create -f linux-bridge-dev.yml -n multus-test
networkattachmentdefinition.k8s.cni.cncf.io/bridge-dev created

You can verify the creation of the network attachment definition by executing the oc get net-attach-def -n namespace command.
[user@host ~]$ oc get net-attach-def -n multus-test
NAME AGE
bridge-dev   54s

References

For more information about the Kubernetes network model, refer to the Services, Load Balancing, and Networking documentation at https://kubernetes.io/docs/concepts/services-networking/#the-kubernetes-network-model
For more information about Multus and configuring additional networks, refer to the Multiple Networks chapter in the Red Hat OpenShift Container Platform 4.14 Networking documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#multiple-networks