Chapter 4.  Connecting Virtual Machines to External Networks

Abstract

Goal

Configure node networking to connect virtual machines and nodes to networks that are outside the cluster.

Objectives
  • Describe the Multus CNI plug-in and use cases.

  • Configure multihomed nodes and virtual machines using the NMstate operator and Multus.

Sections
  • About Multus (and Quiz)

  • Configuring Multihomed Nodes and Virtual Machines (and Guided Exercise)

Lab
  • Connecting Virtual Machines to External Networks

About Multus

Objectives

  • Describe the Multus CNI plug-in and use cases.

Kubernetes Networking

The Kubernetes network model assigns an IP address to every pod in the cluster, so explicit links between pods and mappings of container ports to node ports are not necessary. The Kubernetes network model imposes these requirements on any networking implementation:

  • Pods in a cluster can communicate with each other without network address translation (NAT).

  • Agents on a node (system processes such as the kubelet) can communicate with all pods on that node.

Kubernetes assigns IP addresses at the pod level, which means that all containers within a pod share a network namespace and can reach each other's ports on the localhost address.
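The shared pod network namespace can be sketched with a minimal two-container pod; both containers below are hypothetical examples (the image names and port are assumptions), and the sidecar reaches the web server over localhost because they share one IP address:

```yaml
# Hypothetical pod: both containers share one network namespace,
# so the sidecar can reach the web server over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
  - name: web
    image: registry.access.redhat.com/ubi9/httpd-24  # assumed image
    ports:
    - containerPort: 8080
  - name: sidecar
    image: registry.access.redhat.com/ubi9/ubi       # assumed image
    command: ["sh", "-c", "sleep 5; curl -s http://localhost:8080; sleep infinity"]
```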

Pods are unaware of the existence of node ports. However, you can request node ports that forward to a pod.
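A node port is typically requested through a Service of type NodePort. The following sketch is an illustration only; the service name, label selector, and port numbers are assumptions:

```yaml
# Hypothetical NodePort service: traffic that arrives on port 30080
# of any node is forwarded to port 8080 of matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport        # assumed name
spec:
  type: NodePort
  selector:
    app: web                # assumes pods labeled app=web
  ports:
  - port: 8080              # service port
    targetPort: 8080        # pod port
    nodePort: 30080         # port opened on every node
```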

See the references section for more information about the Kubernetes network model.

Multus in Red Hat OpenShift Virtualization

Container runtimes can implement the Kubernetes network model in many ways, including through custom resources (CR) and Container Network Interface (CNI) plug-ins. Similar to a wrapper, the Multus CNI plug-in calls other CNI plug-ins for advanced networking functions, such as attaching multiple network interfaces to pods in an OpenShift cluster.

RHOCP uses the Multus CNI plug-in to chain other CNI plug-ins. With Multus CNI, you can configure additional networks alongside the default pod network during and after your OpenShift cluster installation. Attaching multiple networks to a VM is called multihoming.

Although you can add another network to pods, all pods must contain an eth0 interface that is attached to the default pod network to maintain connectivity across the cluster. You can view the interfaces that are attached to a pod by using the ip address command in that pod:

[user@host ~]$ oc exec -it pod_name -- ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
...output omitted...
2: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
...output omitted...
3: net1@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
...output omitted...

Additional network interfaces that use the Multus CNI plug-in follow the netN naming convention.
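A secondary interface such as net1 is requested with the k8s.v1.cni.cncf.io/networks annotation on the pod. The following is a minimal sketch, assuming that a network attachment definition named bridge-dev already exists in the pod's namespace (the pod name and image are assumptions):

```yaml
# Hypothetical pod that requests one additional network.
apiVersion: v1
kind: Pod
metadata:
  name: multus-demo
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-dev   # Multus adds net1 alongside eth0
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi   # assumed image
    command: ["sleep", "infinity"]
```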

The following figure shows how additional network interfaces, which are attached by the Multus CNI plug-in, coexist within a pod:

Figure 4.1: Additional pod interfaces that are attached by Multus CNI

Multus CNI Use Cases

Attaching additional networks can be helpful in situations where network isolation is needed. The following use cases demonstrate some reasons for network isolation.

Performance

You can increase the performance of a network-intensive workload by isolating data and control planes to different node interfaces.

Security

You can isolate sensitive traffic onto a network plane that is managed specifically for security considerations, such as private data that must not be shared between tenants or customers.

External network

You can connect pods and VMs to networks outside the cluster.

Implementing Multus

OpenShift Virtualization uses Custom Resource Definitions (CRDs) and CNI plug-ins, such as Multus CNI, to provide advanced networking functions for your VMs. In RHOCP, the Cluster Network Addons Operator (CNAO) manages additional network configurations for VMs that are based on the Multus CNI plug-in. The Multus CNI plug-in is implemented through the NetworkAttachmentDefinition CR; the Network Plumbing Working Group leads the development.

A network attachment definition is a namespaced object that exposes existing layer-2 network devices, such as bridges and switches, to VMs and pods. In clusters where the OpenShift Virtualization operator is not deployed, Red Hat recommends using the Cluster Network Operator (CNO) to centralize the management of additional networks for pods. For VMs in OpenShift Virtualization, you do not need to edit the CNO to create a network attachment definition.

To attach additional network interfaces to a pod or a VM, you must create a network attachment definition that defines the CNI plug-in configuration to use for the additional interface.

RHOCP provides the following CNI plug-ins for the Multus CNI plug-in to chain:

bridge

Configure an additional bridge-based network to enable pods on the same host to communicate with each other and with the host. A Linux bridge is required to attach VMs to multiple networks.

host-device

Configure an additional host-device network to enable pods to access a physical Ethernet network device on the host system.

ipvlan

Configure an additional IPVLAN-based network to enable pods on a host to have different IP addresses with the same MAC address as the host's physical device.

macvlan

Configure an additional MACVLAN-based network to enable pods on a host to communicate with other hosts and their pods by using a physical network interface. Unlike an IPVLAN-based network, each pod that is attached to a MACVLAN additional network has a unique MAC address.

SR-IOV

Configure an additional SR-IOV network to enable pods to attach to a virtual function interface on SR-IOV-capable hardware on the host system.
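As an illustration of how each plug-in is configured through its own network attachment definition, the following sketch shows a macvlan-based definition. The resource name, parent interface ens3, and the static IPAM range are all assumptions for this example:

```yaml
# Hypothetical macvlan network attachment definition.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net               # assumed name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens3",
    "mode": "bridge",
    "ipam": {
      "type": "static",
      "addresses": [ { "address": "192.168.100.10/24" } ]
    }
  }'
```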

Each plug-in has its own requirements and limitations for the additional network, so the installation and configuration steps differ between plug-ins. Each plug-in requires a network attachment definition that defines its configuration. Before you configure additional networks for VMs, a Linux bridge must be configured and attached to your VM-workload nodes by applying a configuration manifest to the cluster.

Note

Node networking is explained in more detail later in this course.

You can create network attachment definitions for VMs from the command line with a YAML manifest or from the OpenShift web console.

To configure a network attachment definition from the OpenShift web console, navigate to Networking → Network Attachment Definitions and then complete the form, or use the YAML editor to define the additional network. The following figure provides an example of a network attachment definition for a Linux bridge:

Figure 4.2: Configuring a Linux bridge network attachment definition

Note

If VLAN IDs are configured on your additional network, then you can specify the ID numbers in the VLAN Tag Number field. Otherwise, you can leave the field empty.

You can also configure the bridge plug-in with a YAML manifest. The following YAML definition shows the configuration of a bridge plug-in:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-dev  1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/bridge-dev 2
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-dev",  3
    "type": "cnv-bridge",  4
    "bridge": "dev-bridge", 5
    "macspoofchk": true, 6
    "vlan": 0 7
  }'

1 The name for the network attachment definition.

2 Optional: An annotation key-value pair that determines node selection for VM scheduling and execution. In this example, the bridge-dev bridge is configured on some cluster nodes. VMs that are attached to this network are scheduled only on nodes with the bridge-dev bridge.

3 The name for the configuration. Red Hat recommends matching the configuration name to the name of the network attachment definition.

4 The name of the CNI plug-in to use. Change this field only if you use a different CNI plug-in.

5 The name of the Linux bridge that is configured on the node.

6 Optional: When set to true, the MAC address of the pod or a guest interface cannot be changed, and only one MAC address exits the pod.

7 Optional: The VLAN tag of the network.

Apply the YAML manifest to your cluster with the oc create -f or oc apply -f commands.

Note

The network attachment definition must be in the same namespace as the virtual machine.

[user@host ~]$ oc create -f linux-bridge-dev.yml -n multus-test
networkattachmentdefinition.k8s.cni.cncf.io/bridge-dev created

You can verify the creation of the network attachment definition by executing the oc get net-attach-def -n namespace command.

[user@host ~]$ oc get net-attach-def -n multus-test
NAME          AGE
bridge-dev    54s
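After the network attachment definition exists, a VM in the same namespace can reference it through the networks section of its manifest. The following is a minimal sketch of the relevant fields of a VirtualMachine resource; the VM name and interface names are assumptions, and unrelated fields are omitted:

```yaml
# Hypothetical VM fragment: one default pod interface plus one
# interface attached through the bridge-dev network attachment definition.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-dev                      # assumed name
  namespace: multus-test
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}          # default pod network
          - name: external
            bridge: {}              # secondary, bridge-based interface
      networks:
      - name: default
        pod: {}
      - name: external
        multus:
          networkName: bridge-dev   # the network attachment definition
```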

References

Demystifying Multus

For more information about the Kubernetes network model, refer to the Services, Load Balancing, and Networking documentation at https://kubernetes.io/docs/concepts/services-networking/#the-kubernetes-network-model

For more information about Multus and configuring additional networks, refer to the Multiple Networks chapter in the Red Hat OpenShift Container Platform 4.14 Networking documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#multiple-networks

Revision: do316-4.14-d8a6b80