
Configuring Multihomed Nodes and Virtual Machines

Objectives

  • Configure multihomed nodes and virtual machines by using the NMstate operator and Multus.

The Kubernetes NMState Operator

The Kubernetes NMState Operator provides a centralized and declarative host network configuration tool in a Red Hat OpenShift cluster. The operator reports the node network configuration, validates configuration syntax, and applies network configuration changes without a node reboot. With the NMState operator, you can configure additional node network devices, such as Linux bridges, and use them in network attachment definitions.

As with other operators, you can install the Kubernetes NMState Operator by using either the OpenShift web console or the command line.

Consult the references section for installing an operator by using the OperatorHub or the oc command.

After the installation is complete, you must create an NMState instance named nmstate to deploy the NMState controller on all OpenShift nodes.
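From the command line, you can create the instance by applying a manifest such as the following minimal sketch; the instance must use the nmstate name:

```yaml
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
```

After you apply the manifest, for example with the oc apply -f command, the operator deploys the NMState controller on all nodes.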

From the OpenShift web console, navigate to Operators → Installed Operators and open the Kubernetes NMState Operator page. In the NMState card, click Create instance.

Figure 4.3: Creating the NMState instance

The NMState operator provides three API resources for managing the network configuration:

  • NodeNetworkState

  • NodeNetworkConfigurationPolicy

  • NodeNetworkConfigurationEnactment

Node Network State

A node network state (NNS) resource exists on each cluster node and is periodically updated to include the state of the network for that node. To review the current network state for a particular node, use the oc get nns/node-name -o yaml command.

The following example shows the output of the oc get nns/node-name -o yaml command:

apiVersion: nmstate.io/v1beta1
kind: NodeNetworkState
metadata:
  name: worker01
...output omitted...
status:
  currentState:
    interfaces:
    - accept-all-mac-addresses: false
      controller: br-ex
...output omitted...
      ipv4:
        address:
        - ip: 192.168.50.13
          preferred-life-time: 437316606sec
          prefix-length: 24
          valid-life-time: 437316606sec
...output omitted...
        dhcp: true
        dhcp-send-hostname: true
        enabled: true
...output omitted...
      name: br-ex
      profile-name: ovs-if-br-ex
      state: up
      type: ovs-interface
...output omitted...

You can also review the node network configuration from the OpenShift web console on the Networking → NodeNetworkState page.

Figure 4.4: Node network configuration with NodeNetworkState

Node Network Configuration Policy

A node network configuration policy (NNCP) describes the intended network configuration for OpenShift nodes. You can create and manage node network interfaces, such as declaring a Linux bridge, with a node network configuration policy. NMState can manage several interface types, such as Linux bridge, bonding, and Ethernet. You can also configure additional options, such as Spanning Tree Protocol (STP) and IPv4 connectivity with an NNCP.

You can manage an NNCP from the OpenShift web console on the Networking → NodeNetworkConfigurationPolicy page.

Figure 4.5: Creating a node network configuration policy

By default, an NNCP resource is applied to all nodes in the cluster. However, you can specify which nodes to apply the policy to, such as only compute nodes, by including a node selector with the appropriate label in the NNCP.

The following example shows an NNCP that defines a Linux bridge on a node's ens4 network interface.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-policy 1
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: "" 2
  desiredState:
    interfaces:
    - name: br1 3
      description: Linux bridge with ens4 as a port 4
      type: linux-bridge 5
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens4 6

1

The name for the node network configuration policy.

2

The nodeSelector specifies that this policy applies to all compute nodes in the cluster. If the nodeSelector parameter is not included, then the policy applies to all nodes in the cluster.

3

The name parameter specifies the chosen name for the configured interface.

4

An optional description for the policy.

5

The type parameter specifies the type of connection to create.

6

The physical interface on the node to use in the defined Linux bridge.

You can list an NNCP from the command line with the oc get nncp command. To view the details of an existing NNCP, use the oc get nncp nncp-name -o yaml command.

To remove an interface from a node, you must modify or create an NNCP that sets the interface's state parameter to absent. When an interface is set to absent, the physical network interface that was configured with the bridge or bonding interface is placed in the down state, and connectivity through it is lost. To avoid losing connectivity, declare your intended configuration for the underlying node interface in the same NNCP that removes the bridge or bonding interface.
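The following NNCP is a sketch of this technique, based on the br1 bridge from the earlier example. It removes the bridge and, in the same policy, restores DHCP connectivity on the underlying ens4 interface; the policy name is an assumption for illustration:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: remove-br1-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    # Remove the bridge from the node.
    - name: br1
      type: linux-bridge
      state: absent
    # Reconfigure the physical interface in the same policy
    # so that the node does not lose connectivity.
    - name: ens4
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true
```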

Important

Setting an interface to absent or deleting an NNCP does not restore the previous configuration. A cluster administrator must define a policy with the previous configuration to restore settings.

Node Network Configuration Enactment

When you apply a node network configuration policy to a cluster, the NMState operator generates a NodeNetworkConfigurationEnactment (NNCE) object for each cluster node that the policy affects. The NNCE object reports the execution status of an applied policy and includes the defined settings that the policy enacts on each node. In the event of a failure, the node rolls back to its prior configuration, and the NNCE provides traceback data to troubleshoot the failure.

You can use the oc get nnce command to verify the status of a configuration policy and to confirm that it is successfully applied. To view more detailed information, including intended settings and traceback data, use the oc get nnce nnce-name -o yaml command.

On the OpenShift web console, the NNCE status is attached to the NNCP and is available on the Networking → NodeNetworkConfigurationPolicy page.

Figure 4.6: Node network configuration enactments that are associated with a policy

You can click the node network state to get more information about the policy execution on each node, including error messages for troubleshooting.

Figure 4.7: Failing node network configuration enactment

Kubernetes NMState with Multus in OpenShift Virtualization

By default, VMs are connected to the default pod network. The VMs communicate with resources within the cluster, or with any resources that are accessible through the OpenShift node network. If a VM requires access to resources on a different network, then you must connect the VM to that additional network. VMs that are connected to more than one network are considered multihomed VMs.

OpenShift Virtualization uses Multus Container Network Interface (CNI) plug-ins and the Kubernetes NMState Operator resources to create a Linux bridge that connects VMs to additional networks. A Linux bridge forwards packets between connected interfaces, similar to the function of a network switch.

Figure 4.8: Linux bridge node networking with Multus and Kubernetes NMState

Connecting Virtual Machines to a Linux Bridge Network

When a Linux bridge is configured on nodes, such as through a node network configuration policy, you must create a network attachment definition (NAD) before you can connect your VMs to the bridge.

You can attach a VM to any NAD in the same namespace as the VM. Any NADs that you create in the default namespace are available to all VMs in the cluster.

Use the CNV Linux Bridge CNI plug-in in the network attachment definition to connect a VM to an additional network on the Linux bridge.
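The following network attachment definition is a minimal sketch of this configuration. The br1-bridge-network name matches the NAD name that the VM manifest example later in this section references, and the br1 bridge name is taken from the earlier NNCP example:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br1-bridge-network
spec:
  # CNI configuration that uses the cnv-bridge plug-in
  # to connect VM interfaces to the br1 Linux bridge.
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "br1-bridge-network",
      "type": "cnv-bridge",
      "bridge": "br1"
    }
```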

Figure 4.9: Creating a network attachment definition

Attaching a Secondary Interface on the Additional Network

To connect a VM to an additional network, you must attach the NAD to a new network interface on the VM.

To create a network interface on the VM, navigate to Virtualization → Virtual Machines. Click the Configuration tab and then click Network Interfaces to display the network interfaces that are connected to the VM. Click Add Network Interface, select the NAD from the Network list, and complete the remaining form. Click Add to save and attach the interface to the VM.

If your VM is in the running state, you must restart the VM to complete the process.

Figure 4.10: Attaching a secondary network interface

You can also edit a stopped VM's manifest by using the oc edit command. You must add the bridge interface to the spec.template.spec.domain.devices.interfaces list. You must also specify the NAD name in the spec.template.spec.networks list.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
    name: rhel8-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - masquerade: {}
              name: default
            - bridge: {}
              name: nic-0 1
              model: virtio
...output omitted...
      networks:
        - name: default
          pod: {}
        - name: nic-0 2
          multus:
            networkName: br1-bridge-network 3
...output omitted...

1

The name of the secondary interface connected to the bridge network.

2

The network name. This value must match the name of the corresponding entry in the spec.template.spec.domain.devices.interfaces list.

3

The name of the network attachment definition.

Configure IP Addresses on a VM

VMs on the pod network have an ephemeral IP address that cannot be statically assigned. If your VM requires a static or dynamic IP address, then you must attach a secondary network interface to your VM that is connected to a bridge network.

Note

A DHCP server must be available on the bridge network to provide a dynamic IP address to the VM.

To configure an IP address, you can use the cloud-init service to specify a static or dynamic IP address for the secondary interface. The network device and address are defined in the networkData field of the cloudInitNoCloud volume in the VM manifest.

The following example specifies a static IP address on an eth1 network interface.

kind: VirtualMachine
spec:
...output omitted...
  volumes:
  - cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth1: 1
            addresses:
            - 192.168.51.150/24 2

1

The name of the network interface.

2

The static IP address.
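If a DHCP server is available on the bridge network, a sketch of the equivalent dynamic configuration requests an address on the interface instead of declaring one:

```yaml
kind: VirtualMachine
spec:
...output omitted...
  volumes:
  - cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth1:
            dhcp4: true
```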

See the references section for more information about configuring IP addresses with the cloud-init service.

After the configuration is complete, you can review the IP addresses of a running VM on the Overview tab.

Note

The QEMU guest agent is responsible for collecting the IP addresses of the VM. The guest agent must be installed and running on the VM operating system for OpenShift Virtualization to display the network information in the web console.

Figure 4.11: IP addresses that are assigned to a running VM

References

For more information about the NMState operator, refer to the Kubernetes NMState chapter in the Red Hat OpenShift Container Platform 4.14 Networking documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#kubernetes-nmstate

For more information about installing operators by using the OperatorHub, refer to the Installing from OperatorHub Using the Web Console section in the Administrator Tasks chapter in the Red Hat OpenShift Container Platform 4.14 Operators documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/operators/index#olm-installing-from-operatorhub-using-web-console_olm-adding-operators-to-a-cluster

For more information about installing operators by using the command line, refer to the Installing from OperatorHub Using the CLI section in the Administrator Tasks chapter in the Red Hat OpenShift Container Platform 4.14 Operators documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/operators/index#olm-installing-operator-from-operatorhub-using-cli_olm-adding-operators-to-a-cluster

For more information about attaching a VM on an additional network, refer to the Connecting a Virtual Machine to a Linux Bridge Network section in the Networking chapter in the Red Hat OpenShift Container Platform 4.14 Virtualization documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/virtualization/index#virt-connecting-vm-to-linux-bridge

For more information about configuring IP addresses with the cloud-init service, refer to the Configuring IP Addresses for Virtual Machines section in the Networking chapter in the Red Hat OpenShift Container Platform 4.14 Virtualization documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/virtualization/index#virt-configuring-viewing-ips-for-vms

Revision: do316-4.14-d8a6b80