Configure External Access to Virtual Machines

Objectives

  • Configure external access to virtual machines.

Kubernetes Service Objects with External Access

RHOCP and Kubernetes provide several mechanisms to make your applications available from outside the cluster.

Kubernetes provides different service resource types to configure external access:

  • The NodePort resource type exposes a network port on all your cluster nodes that redirects incoming traffic to the service's pods or VMs.

  • The LoadBalancer resource type instructs Kubernetes to interact with the cloud provider that the cluster runs in, to provision a load balancer. The load balancer then provides an externally accessible IP address to the application.

Red Hat OpenShift provides route resources to expose your applications to networks outside the cluster. Routes provide ingress for HTTP and HTTPS traffic, and for TLS traffic with SNI. With routes, you can access your application with a unique, publicly accessible hostname. Kubernetes provides ingress resources that are similar to route resources, although routes provide more features, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments.

When you migrate a VM into RHOCP from another hypervisor technology, you can configure external access so that clients can still access the application. You can also make a database server available to services that run outside your OpenShift cluster, or expose legacy applications that you plan to convert to containers later. After converting an application, you reconfigure external access so that clients can continue to access it with the same DNS name.

Configure Node Port Service Resources

Service resources use the ClusterIP resource type by default. With that type, the service is accessible only from inside the cluster.

By setting the type parameter to NodePort, Kubernetes opens the same network port on all the cluster nodes and then redirects the incoming traffic to your pods or VMs. By default, Kubernetes allocates a port in the 30000 to 32767 range.

With this configuration, an external client can reach your service by targeting the IP address of one of your cluster nodes and the allocated port.

Figure 3.22: Incoming traffic through a node port service

The following service definition declares a service resource of the NodePort type:

apiVersion: v1
kind: Service
metadata:
  name: database
  namespace: prod2
spec:
  type: NodePort  1
  selector:
    vmtype: linux
  ports:
    - protocol: TCP
      port: 3306  2
      targetPort: 3306
      nodePort: 30336  3

1

The type parameter declares the service type: ClusterIP, NodePort, or LoadBalancer.

2

Inside the cluster, the service makes the database application available on port 3306. The service behaves similarly to a ClusterIP service.

3

The nodePort parameter provides the port number that Kubernetes must open on all the cluster nodes. The port that you select must be available on all the nodes and in the range 30000 to 32767. If you do not provide the nodePort parameter, then Kubernetes selects an available port.
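
With this configuration, an external client can reach the database by targeting any cluster node on port 30336. The following check is a hypothetical sketch; the node IP address and the database credentials are placeholders:

[user@host ~]$ mysql -h 192.0.2.10 -P 30336 -u dbuser -p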

Although it is possible to use the NodePort service type for TCP and UDP traffic, Red Hat recommends avoiding it for the following reasons:

  • You must expose the IP addresses of your cluster nodes to clients outside the cluster.

  • You cannot select a port outside the default range (30000-32767).

  • In cloud environments, the IP address of cluster nodes might not be permanent. For example, the cloud infrastructure might add or remove nodes to adapt the cluster to the load.

Configure Load Balancer Service Resources for Cloud Environments

You can create services of the LoadBalancer type to provide external access to your applications on clusters that are deployed on cloud infrastructure. When you create that type of service, Kubernetes automatically configures an external load balancer from the cloud provider. For example, when you create a service on a cluster that is deployed on Amazon Web Services (AWS), Kubernetes creates a load balancer with Amazon Elastic Load Balancing (ELB). With a cluster that is deployed on IBM Cloud, Kubernetes creates the load balancers with IBM Cloud load balancers, and on Microsoft Azure, Kubernetes creates the load balancers with Azure Load Balancer.

Figure 3.23: Incoming traffic through the cloud provider load balancer

The following service definition declares a service resource of the LoadBalancer type, which sets the loadBalancerIP parameter:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: prod2
spec:
  type: LoadBalancer  1
  loadBalancerIP: 40.121.123.12  2
  selector:
    tier: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

1

The type parameter declares the LoadBalancer service type.

2

Some cloud providers accept an IP address to assign to the load balancer. With some providers, you must reserve that address before creating the service. If you do not set the loadBalancerIP parameter, then the cloud provider assigns an ephemeral address.
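
After the cloud provider provisions the load balancer, the assigned address appears in the EXTERNAL-IP column of the service. The following output is illustrative:

[user@host ~]$ oc get service frontend -n prod2
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
frontend   LoadBalancer   172.30.95.81   40.121.123.12   80:31347/TCP   2m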

Note

Because each LoadBalancer service creates a load balancer with the cloud provider, it can become expensive to use that kind of service.

Configure Load Balancer Service Resources for Non-cloud Environments

For clusters that are deployed on bare metal, services of the LoadBalancer type behave differently. Kubernetes assigns an IP address to the service from a pool of external addresses that the cluster administrators prepare.

Cluster administrators must configure these external IP addresses so that the network can route them to a cluster node. When the cluster nodes receive packets where the destination IP is the external IP of the service, they forward the traffic to the pods or virtual machines that are associated with the service.

The following service definition declares a service resource of the LoadBalancer type, which sets the externalIPs parameter:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: prod2
spec:
  type: LoadBalancer  1
  externalIPs:
    - 192.168.0.42  2
  selector:
    tier: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

1

The type parameter declares the service type: ClusterIP, NodePort, or LoadBalancer.

2

The externalIPs parameter lists the external IP addresses to assign to the service. Those addresses must be available in the pool of addresses that the cluster administrators prepare. If you do not specify an IP address, then Kubernetes selects one for you.

Note

You can use the LoadBalancer service type for TCP and UDP traffic.

The MetalLB Component

MetalLB is a load balancer component that provides a load balancing service for clusters that do not run on a cloud provider, such as bare-metal clusters or clusters that run on hypervisors. MetalLB operates in two modes, layer 2 and Border Gateway Protocol (BGP), with different properties and requirements. Plan your use of MetalLB according to your requirements and your network design.

MetalLB is an operator that you can install with the Operator Lifecycle Manager. After installing the operator, you must configure MetalLB through its custom resource definitions. In most situations, you must provide MetalLB with an IP address range.
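
For example, a minimal layer 2 configuration consists of an IPAddressPool resource and an L2Advertisement resource. The following manifests are a sketch; the resource names and the address range are placeholder values:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.100-192.168.0.150
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool

With this configuration, MetalLB assigns addresses from the pool to services of the LoadBalancer type and answers address resolution requests for them on the node network.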

Configure Red Hat OpenShift Routes

Red Hat OpenShift routes provide ingress HTTP, HTTPS, and TLS traffic to services in the cluster. A route connects a public-facing DNS hostname to an internal-facing service IP.

Red Hat OpenShift implements routes by deploying a cluster-wide router service, which runs an HAProxy load balancer as a containerized application in the OpenShift cluster. Red Hat OpenShift scales and replicates router pods like any other application.

Figure 3.24: Incoming traffic through the Red Hat OpenShift router pod

Red Hat OpenShift can automatically assign a DNS entry to your application when creating a route. Red Hat OpenShift assigns a name in the routename-namespace.default_domain format. The RHOCP administrators configure the default_domain base name during the cluster installation. For example, if your default domain is apps.mycompany.com and you create a route named intranet in the prod project, then the DNS name is intranet-prod.apps.mycompany.com.

RHOCP administrators must configure the company's DNS system so that the default_domain wildcard DNS record points to the public-facing IP addresses of the nodes that are running the router.

Note

The DNS server that hosts the wildcard domain knows nothing about route hostnames. The server merely resolves any name to the configured IP addresses. Only the Red Hat OpenShift router knows about route hostnames, and treats each one as an HTTP virtual host. The Red Hat OpenShift router blocks invalid wildcard domain hostnames that do not correspond to any route and returns an HTTP error.

Routes work only with certain types of traffic: HTTP, HTTPS with Server Name Indication (SNI), and TLS with SNI. With SNI, the client sends the name of the host that it is trying to reach in clear text during the TLS handshake. Red Hat OpenShift uses that feature to identify the target service when it receives encrypted traffic.

For other traffic types, such as UDP traffic or non-web TCP traffic, Red Hat recommends that you use services of the LoadBalancer or the NodePort type.

Create Routes

After creating a service of the ClusterIP type, which points to your pods or virtual machines, you can create a route for your application. You can create a resource file in the YAML or the JSON format to declare your route. However, Red Hat OpenShift provides the oc expose service command to simplify the creation process:

[user@host ~]$ oc expose service/web
route.route.openshift.io/web exposed

That command creates a route resource with the same name as the service. Use the --name option to define a different name.

Red Hat OpenShift automatically assigns a DNS name to the route. You can use the --hostname option followed by the DNS name to define it manually. The following command creates a web route that defines the web-production.apps.mycompany.com hostname.

[user@host ~]$ oc expose service/web --name web --hostname web-production.apps.mycompany.com
route.route.openshift.io/web exposed

Use the oc get route command to retrieve the DNS name that is assigned to the route:

[user@host ~]$ oc get route web
NAME   HOST/PORT                           PATH   SERVICES   PORT   TERMINATION   WILDCARD
web    web-production.apps.mycompany.com          web        8080                 None

From the web console, navigate to Networking → Routes, select a namespace, click Create Route, and then complete the form with the route details. You must name the route and select the associated service and its ports.

Figure 3.25: Create a route by using the web console

Create Path-based Routes

The route resource definition accepts the path variable to specify the path component of the URL. With that configuration, you can use the same DNS name for several services and redirect the traffic based on the path component. For example, you could use the http://intranet-prod.apps.mycompany.com/static URL to send traffic to pods or to virtual machines that serve static web content. For REST API requests, you could use the http://intranet-prod.apps.mycompany.com/api URL to forward the traffic to the application back end.

Add the --path option to the oc expose service command to configure path-based routes:

[user@host ~]$ oc expose service/static --path=/static \
  --hostname=intranet-prod.apps.mycompany.com
route.route.openshift.io/static exposed
[user@host ~]$ oc expose service/restapi --path=/api \
  --hostname=intranet-prod.apps.mycompany.com
route.route.openshift.io/restapi exposed
[user@host ~]$ oc get routes
NAME     HOST/PORT                          PATH      SERVICES   PORT ...
static   intranet-prod.apps.mycompany.com   /static   static     8080 ...
restapi  intranet-prod.apps.mycompany.com   /api      restapi    80   ...

Secure Routes

Routes can be secured (TLS or HTTPS) or unsecured (HTTP). Secured routes support several types of TLS termination to serve certificates to the client.

A secured route specifies the TLS termination of the route. The following list describes the available termination types:

Edge

TLS termination occurs at the router before Red Hat OpenShift routes the traffic to the pods or to the virtual machines. The router serves the TLS certificates to the clients. You can provide your certificate when you create the routes, or Red Hat OpenShift can use its certificate. Because TLS is terminated at the router, connections from the router to the endpoints over the internal network are not encrypted.

Passthrough

The router sends the traffic straight to the destination pod or virtual machine. In this mode, the application is responsible for serving certificates for the traffic.

Re-encryption

Re-encryption is a variation of edge termination. The router terminates TLS with a certificate and then re-encrypts its connection to the endpoint, which might have a different certificate. Therefore, the full path of the connection is encrypted, even over the internal network.
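
For example, you can create an edge-terminated route with a custom certificate by using the oc create route edge command. The following command is a sketch; the certificate and key file names and the hostname are placeholders:

[user@host ~]$ oc create route edge web-secure --service web \
  --cert tls.crt --key tls.key \
  --hostname web.apps.mycompany.com
route.route.openshift.io/web-secure created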

The references section at the end provides links to documents that explain how to configure secured routes.

Configure Routes for Virtual Machines

To configure a route for a VM, you must first create a service for the virt-launcher pod of the VM:

  1. Add a label to the VirtualMachine resource in the .spec.template.metadata.labels path, as shown in the manifest excerpt after the service example.

  2. Create the service resource from a definition file in YAML format. Use the label to select the virt-launcher pod. The following example expects the virt-launcher pod to have the tier: front label.

apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: prod2
spec:
  type: ClusterIP
  selector:
    tier: front
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
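
For reference, the following excerpt is a minimal sketch of where the label from the first step belongs in the VirtualMachine resource. The VM name is a placeholder, and unrelated fields are omitted:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: web-vm
  namespace: prod2
spec:
  template:
    metadata:
      labels:
        tier: front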

After you create the service, you can then create the route.

From the command line, use the oc expose service command to create the route:

[user@host ~]$ oc expose service/web
route.route.openshift.io/web exposed

Enable Traffic from the Router Pods

When you use network policies in your project, you must add a rule to enable the traffic that comes from the router pods. Otherwise, the route resources do not work, because the router pods cannot reach the destination pods and virtual machines.

The router pods run in the openshift-ingress namespace. That namespace already has a label that you can use with your network policies. The following command shows that the label is network.openshift.io/policy-group: ingress:

[user@host ~]$ oc get namespace openshift-ingress -o yaml
kind: Namespace
apiVersion: v1
metadata:
...output omitted...
  labels:
    kubernetes.io/metadata.name: openshift-ingress
    name: openshift-ingress
    network.openshift.io/policy-group: ingress
    olm.operatorgroup.uid/44814614-edc3-4e46-9491-5f0f97af9c84: ""
    openshift.io/cluster-monitoring: "true"
    policy-group.network.openshift.io/ingress: ""
...output omitted...

Add the following NetworkPolicy resource to your project to enable the traffic from the router pods:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress  1
  podSelector: {}  2

1

The network policy enables the traffic that comes from the openshift-ingress namespace, because that namespace has the network.openshift.io/policy-group: ingress label.

2

The network policy targets all the pods in the current namespace when the podSelector parameter is empty.
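
The following command is a sketch of applying the policy to a project, assuming that you saved the resource definition in the allow-from-openshift-ingress.yaml file:

[user@host ~]$ oc apply -f allow-from-openshift-ingress.yaml -n prod2
networkpolicy.networking.k8s.io/allow-from-openshift-ingress created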

Configure Kubernetes Ingress Resources

The route resources are specific to Red Hat OpenShift. Kubernetes provides ingress resources, with some of the same features as routes. Some route features, such as TLS re-encryption, are not available with ingress resources.

The following example defines an ingress resource named web:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web-production.apps.mycompany.com  1
      http:
        paths:
          - path: /  2
            pathType: Prefix  3
            backend:
              service:  4
                name: web
                port:
                  number: 8080

1

The host parameter provides the DNS hostname to assign to the application.

2

The path parameter provides the path component of the URL.

3

The pathType parameter specifies how to match the path component of the URL. If set to Exact, then the path must be identical to the path parameter. If set to Prefix, then any path that starts with the path component matches.

4

The service section provides the service to associate with the ingress resource.

Red Hat OpenShift implements ingress resources with route resources. The following example shows the route resource that Red Hat OpenShift creates:

[user@host ~]$ oc get ingress
NAMESPACE    NAME   CLASS    HOSTS                  ADDRESS ...
production   web    <none>   web-production...com   router-default...com
[user@host ~]$ oc get routes
NAME        HOST/PORT                           PATH   SERVICES   PORT ...
web-djh2b   web-production.apps.mycompany.com   /      web        8080-tcp ...

To create an ingress resource from the web console, navigate to Networking → Ingresses, click Create Ingress, and then complete the YAML manifest with the ingress parameters.

Figure 3.26: Create an ingress by using the web console

References

Kubernetes Ingress versus OpenShift Route

For more information, refer to the Configuring Ingress Cluster Traffic chapter in the Red Hat OpenShift Container Platform Networking guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#configuring-ingress-cluster-traffic

For more information on secured routes, refer to the Secured Routes section in the Red Hat OpenShift Container Platform Networking guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#configuring-default-certificate

Kubernetes: Publishing Services
