Introducing Software-defined Networking

Objectives

After completing this section, you should be able to describe software-defined networking, the Open Virtual Network (OVN) switch architecture, and table-driven flow concepts.

Introduction to Open Virtual Network (OVN)

OVN is an open source project launched by the Open vSwitch team to create a vendor-neutral solution for virtual network switching. It provides both layer 2 and layer 3 networking, where other software-defined networking (SDN) solutions commonly provide only one or the other. OVN supports the implementation of security groups, and includes a DHCP service, layer 3 routing, and NAT. In Red Hat OpenStack, OVN exclusively uses the GENEVE tunnel overlay network.

OVN is the default SDN solution for Red Hat OpenStack. It replaces the OVS ML2 driver and the Neutron agents with the OVN ML2 driver, which addresses the limitations and complexity of the OVS driver. The transition to OVN is natural and seamless because it builds on the Open vSwitch technology already implemented in OpenStack. Scalability is improved compared to other SDN solutions because OVN does not use the Neutron agents; instead, it uses the ovn-controller daemon and OVS flows to implement all functionality. The ovn-controller daemon is the local controller for OVN and runs on every host. It connects instances and containers without provisioning physical network resources, which helps to improve performance.

OVN eliminates the need for Linux bridges, dnsmasq instances, and namespaces.
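The ovn-controller daemon reads its connection settings from the local Open vSwitch database on each host. As a quick check, the external_ids column shows the Southbound database endpoint and the GENEVE encapsulation settings. This is a minimal sketch; the host name and addresses are hypothetical:

  [root@compute0 ~]# ovs-vsctl get Open_vSwitch . external_ids
  {hostname=compute0.example.com, ovn-encap-ip="172.24.2.2",
   ovn-encap-type=geneve, ovn-remote="tcp:172.24.1.50:6642", ...}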

OVN Architecture

The OpenStack networking configuration is translated into an OVN logical networking configuration using the OVN ML2 Plug-in. The plug-in runs on the controller nodes.

The OVN Northbound (NB) database stores the logical OVN networking configuration, which it receives from the OVN ML2 plug-in. The database runs on the controller nodes and listens on TCP port 6641.

The OVN Northbound service, the ovn-northd daemon, converts the logical network configuration from the OVN NB database into logical data path flows and populates the OVN Southbound database with them. The daemon runs on the controller nodes.

The OVN Southbound (SB) database listens on TCP port 6642. The ovn-controller daemon on each host connects to the Southbound database to control and monitor network traffic.

The OVN metadata agent spawns the HAProxy instances that serve instance metadata requests, and manages the corresponding OVS interfaces, network namespaces, and HAProxy processes.

Figure 3.7: OVN architecture
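The ovn-nbctl and ovn-sbctl utilities connect to the Northbound and Southbound databases on those ports to display the current configuration. A sketch, assuming a hypothetical controller address of 172.24.1.50 and abbreviated output:

  [root@controller0 ~]# ovn-nbctl --db=tcp:172.24.1.50:6641 show
  switch 9fc7b697-... (neutron-...)
      port ...
  [root@controller0 ~]# ovn-sbctl --db=tcp:172.24.1.50:6642 show
  Chassis "0d9ad444-..."
      hostname: compute0.example.com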

OVN Database

The OVN database is installed in a central location. It can be installed on a physical node, a virtual node, or on a cluster. The choice of location depends on various factors, including the size of the cloud infrastructure, the geographic dispersion of the cloud, the volume of traffic, and the performance required. The hypervisors must run Open vSwitch for OVN to work.

There are two parts to the OVN database: the Northbound Database and the Southbound Database. The Northbound Database receives information about the logical network configuration from the Neutron plug-in. It has two clients, the Neutron plug-in and ovn-northd. The ovn-northd client connects to the OVN Northbound Database and the OVN Southbound Database. It translates the logical network configuration into logical data path flows and stores them in the OVN Southbound Database.

The OVN Southbound database is the center of the entire system. It also has two clients, the ovn-northd and the ovn-controller services. Each hypervisor has its own ovn-controller. The database contains three types of data:

  • Physical Network tables specifying how to reach the overcloud nodes

  • Logical Network tables specifying the logical data path flows

  • Binding tables linking the location of logical network components to the physical network

The Physical Network tables and Binding tables are populated by the hypervisors. The Logical Network tables are populated by ovn-northd.
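Each data type can be observed from the command line: ovn-sbctl show displays the chassis information reported by the hypervisors (Physical Network and Binding data), and ovn-sbctl lflow-list displays the logical data path flows written by ovn-northd. The output below is abbreviated and illustrative:

  [root@controller0 ~]# ovn-sbctl show
  Chassis "0d9ad444-..."
      hostname: compute0.example.com
      Encap geneve
          ip: "172.24.2.2"
      Port_Binding "a1b2c3d4-..."
  [root@controller0 ~]# ovn-sbctl lflow-list
  Datapath: "neutron-..." ...
    table=0 (ls_in_port_sec_l2), priority=100, match=(eth.src[40]), action=(drop;)
  ...output omitted...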

Figure 3.8: OVN control plane architecture

OVN Gateway Router

The OVN Gateway links the overlay network, managed by ovn-northd, to the physical network. There are two ways to link the overlay and physical networks: a layer 2 bridge from an OVN logical switch into a VLAN, or a layer 3 connection between an OVN router and the physical network.
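As an illustration of the layer 3 option, a logical router port can be pinned to a gateway chassis with ovn-nbctl. This is a hand-written sketch; the router, port, MAC, addresses, and chassis name are hypothetical, and the OVN ML2 plug-in normally creates these objects for you:

  [root@controller0 ~]# ovn-nbctl lrp-add router0 lrp-ext 00:de:ad:ff:01:30 172.25.250.10/24
  [root@controller0 ~]# ovn-nbctl lrp-set-gateway-chassis lrp-ext controller0.example.com 20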

OVN DHCP

OVN implements DHCPv4 support, which removes the need for a DHCP agent: virtual networks no longer require a DHCP namespace or a dnsmasq process. When a subnet is created, a new entry is added to the Northbound database, and the ovn-northd service adds logical flows for each logical port where DHCP options are defined. The ovn-controller daemon on each compute node answers DHCP requests from local instances, which means that DHCP support is fully distributed.
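Each subnet's DHCP settings are stored as a row in the DHCP_Options table of the Northbound database, which can be listed with ovn-nbctl. The values shown here are illustrative:

  [root@controller0 ~]# ovn-nbctl list dhcp_options
  _uuid               : 3f25...
  cidr                : "192.168.1.0/24"
  external_ids        : {subnet_id="..."}
  options             : {lease_time="43200", mtu="1442", router="192.168.1.1",
                         server_id="192.168.1.1", server_mac="fa:16:3e:..."}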

OVN Security Groups

In previous OpenStack versions, security groups were implemented by the OVS ML2 driver using iptables. Because iptables rules could only be applied to a Linux bridge, each instance's tap device was attached to a Linux bridge, which connected to the OVS bridge through a veth pair. These extra layers were unnecessarily complex. OVN instead uses the kernel conntrack module to implement security groups. When an instance is created, logical flows are automatically created for each rule in the security group, and those rules are stored on each compute node.
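The OVN ML2 driver expresses each security group rule as an ACL on the logical switch; the allow-related action relies on conntrack to admit reply traffic automatically. A minimal hand-written equivalent, with a hypothetical switch and port name:

  [root@controller0 ~]# ovn-nbctl acl-add neutron-a1b2c3 to-lport 1002 \
  >     'outport == "port1" && ip4 && tcp.dst == 22' allow-related
  [root@controller0 ~]# ovn-nbctl acl-list neutron-a1b2c3
  to-lport  1002 (outport == "port1" && ip4 && tcp.dst == 22) allow-related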

OpenFlow

OpenFlow is a network protocol designed to manage and direct traffic among routers and switches, both virtual and physical. Devices that communicate with SDN controllers must support the OpenFlow protocol.

OVN and OpenFlow

OVN is managed with the OpenFlow protocol. OpenFlow programs the Open vSwitch pipeline, defining how traffic is handled. The pipeline is a series of flow tables, where each flow has a priority, a match, and a set of actions. When a packet matches multiple flows in a table, the flow with the highest priority takes precedence. OpenFlow can dynamically rewrite flow tables, adding and removing network functions as required.
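The programmed pipeline can be inspected on any node with ovs-ofctl; each entry shows its table, priority, match, and actions. The output below is abbreviated and illustrative:

  [root@compute0 ~]# ovs-ofctl -O OpenFlow13 dump-flows br-int
  cookie=0x0, duration=..., table=0, n_packets=..., priority=100,in_port=1
      actions=...,resubmit(,8)
  ...output omitted...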

OVN Logical Flows

OVN Logical Flows are a representation of the system's configuration. Manually programming OpenFlow pipelines, or flows, would be virtually impossible to manage. OVN's SDN controller creates flows automatically across all switches and network components.

Logical flows are similar to OpenFlow concepts, with priorities, a match, and actions. Logical flows describe the detailed behavior of an entire network. OVN creates the network in logical flows which are distributed to each hypervisor's ovn-controller. Each ovn-controller translates the logical flows into OpenFlow, describing how to reach other hypervisors.

OVN defines logical switches and ports, with both ingress and egress pipelines created. A packet entering the network traverses the ingress pipeline on the originating hypervisor. If the destination is on the same hypervisor, the egress pipeline is executed. If the destination is remote, the packet is sent through a GENEVE tunnel and the egress pipeline is executed on the remote host.
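The ovn-trace utility simulates a packet through these logical pipelines and prints the ingress and egress stages it traverses. A sketch with hypothetical switch, port, and MAC values:

  [root@controller0 ~]# ovn-trace sw0 'inport == "sw0-port1" && eth.src == 52:54:00:00:00:01 && eth.dst == 52:54:00:00:00:02'
  ingress(dp="sw0", inport="sw0-port1")
  ...output omitted...
  egress(dp="sw0", outport="sw0-port2")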

Logical Flows Explained

As mentioned, OVN uses logical flows for all communication. In the past, a qdhcp namespace running dnsmasq provided the DHCP service; this is now provided by OpenFlow and the ovn-controller daemon, and the service is distributed across the compute nodes. The OVN flows are stored in OpenFlow tables on the br-int bridge.

All networks are managed using OpenFlow logical flows. OVN provides a private IP address for each instance created. This is true for networks using DHCP and networks created with an allocation pool. OVN does not distinguish between the two; it creates the necessary flows and stores them in the OVN tables.

Instances on the same network do not require a router to communicate, assuming that the security group allows ingress and egress communication between them. Therefore, no router OVN flows are created until a router is created on the network. OVN creates and stores the routing rules in the OVN tables on br-int and br-ex on the controller nodes, ensuring communication between networks.

Every instance is created with a security group. If no security group is defined during creation, the default security group is used. Access to instances is managed by OVN. When an instance is created, the logical flows are created in the OVN tables on br-int on the compute node.

Controller and compute nodes communicate with each other on a single layer 2 network, so no router is required to enable communication between the nodes. The eth1 NICs in the diagram below do not have layer 3 IP addresses because they are not required. An overlay tunnel is created between each pair of hosts in the overcloud, and each tunnel's unique ID is used as the port name on br-int.
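The tunnel ports that ovn-controller creates are visible on br-int with ovs-vsctl. In this abbreviated, illustrative output, the port name encodes the remote chassis and the options carry the remote endpoint:

  [root@compute0 ~]# ovs-vsctl show
  ...output omitted...
      Bridge br-int
          Port "ovn-0d9ad4-0"
              Interface "ovn-0d9ad4-0"
                  type: geneve
                  options: {csum="true", key=flow, remote_ip="172.24.2.1"}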

The br-int switch uses a separate VLAN ID for each object. For example, an instance connects to a port on br-int. OVN assigns a unique VLAN ID for that connection. This means that every network entity is isolated.

The redesigned metadata service uses one HAProxy instance for each tenant or provider network, resulting in multiple metadata instances, each with its own namespace. This is true only for networks where instances are deployed. If a network has no instances, or the instances have been removed, the metadata instance does not exist. Unlike other services implemented in OVN, HAProxy is a service daemon and therefore requires namespaces for network isolation.
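On a compute node hosting instances, each such network appears as an ovnmeta namespace with an HAProxy process running inside it. The namespace name embeds the network UUID; the values here are illustrative:

  [root@compute0 ~]# ip netns list
  ovnmeta-a1b2c3d4-e5f6-47a8-9c0d-e1f2a3b4c5d6 (id: 0)
  [root@compute0 ~]# pgrep -af haproxy
  ...output omitted...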

Figure 3.9: OVN logical flows

OVN logical flows are created and removed automatically each time the infrastructure changes, which is one of the primary benefits of OVN. For example, when a router is created, all flows pertaining to that network element are created and stored in the OVN tables on the required bridges on each required node. If that router is removed, all flows pertaining to it are removed from all OVN tables on all nodes.

 

References

Further information is available in the Networking with Open Virtual Network guide for Red Hat OpenStack Platform at https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/networking_guide/index/

Revision: cl110-16.1-4c76154