Chapter 5.  Expose non-HTTP/SNI Applications

Abstract

Goal

Expose applications to external access without using an ingress controller.

Objectives
  • Expose applications to external access by using load balancer services.

  • Expose applications to external access by using a secondary network.

Sections
  • Load Balancer Services (and Guided Exercise)

  • Multus Secondary Networks (and Guided Exercise)

Lab
  • Expose non-HTTP/SNI Applications

Load Balancer Services

Objectives

  • Expose applications to external access by using load balancer services.

Exposing Non-HTTP Services

When you use Kubernetes, you run workloads that provide services to users. You create resources such as deployments to run workloads, for example a web application. Ingresses and routes provide a way to expose the services that these workloads implement. However, in some scenarios, ingresses and routes are not sufficient to expose the service that a pod provides.

Many internet services implement a process that listens on a given port and IP address. For example, a service that uses the 1.2.3.4 IP address runs an SSH server that listens on port 22. Clients connect to port 22 on that IP address to use the SSH service.

Web servers implement the HTTP protocol and other related protocols such as HTTPS.

Kubernetes ingresses and OpenShift routes use the virtual hosting property of the HTTP protocol to expose web services that are running on the cluster. Ingresses and routes run a single web server that uses virtual hosting to route each incoming request to a Kubernetes service by using the request hostname.

For example, ingresses can route requests for the https://a.example.com URL to a Kubernetes service in the cluster, and can route requests for the https://b.example.com URL to a different service in the cluster.

However, many protocols do not have equivalent features. Ingress and route resources can expose only HTTP services and TLS services that identify the target hostname with Server Name Indication (SNI). To expose other services, you must use a different type of resource. Because these resources cannot expose multiple services on the same IP address and port, they require more setup effort, and might require more resources, such as IP addresses.

Important

Prefer ingresses and routes to expose services whenever possible.

Kubernetes Services

Kubernetes workloads are flexible resources that can create many pods. By creating multiple pods for a workload, Kubernetes can provide increased reliability and performance. If a pod fails, then other pods can continue providing a service. With multiple pods, which possibly run on different systems, workloads can use more resources for increased performance.

However, if many pods provide a workload service, then users of the service can no longer access the service by using the combination of a single IP address and a port. To provide transparent access to workload services that run on multiple pods, Kubernetes uses resources of the Service type. A service resource contains the following information:

  • A selector that describes the pods that run the service

  • A list of the ports that provide the service on the pods
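For example, the following resource definition is a minimal sketch of a service with both elements; the example-app name, label, and port values are assumptions for illustration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-app        # assumed service name
  namespace: example
spec:
  selector:
    name: example-app      # selector: matches pods with this label
  ports:
  - port: 8080             # port that the service exposes
    targetPort: 8080       # port that the selected pods listen on
    protocol: TCP
```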

Different types of Kubernetes services exist, each with different purposes:

Internal communication

Services of the ClusterIP type provide service access within the cluster.

Exposing services externally

Services of the NodePort and LoadBalancer types, as well as the use of the external IP feature of ClusterIP services, expose services that are running in the cluster to outside the cluster.

Different providers can implement Kubernetes services. The type field of the service resource determines which implementation exposes the service.

Although NodePort services and the external IP feature are useful in specific scenarios, they require extra configuration and can pose security challenges. Load balancer services have fewer limitations and also provide load balancing.

Load Balancer Services

Load balancer services require the use of network features that are not available in all environments.

For example, cloud providers typically provide their own load balancer services. These services use features that are specific to the cloud provider.

If you run a Kubernetes cluster on a cloud provider, controllers in Kubernetes use the cloud provider's APIs to configure the required cloud provider resources for a load balancing service. On environments where managed load balancer services are not available, you must configure a load balancer component according to the specifics of your network.

The MetalLB Component

MetalLB is a load balancer component that provides a load balancing service for clusters that do not run on a cloud provider, such as a bare metal cluster, or clusters that run on hypervisors. MetalLB operates in two modes: layer 2 and Border Gateway Protocol (BGP), with different properties and requirements. You must plan the use of MetalLB to consider your requirements and your network design.

MetalLB is an operator that you can install with the Operator Lifecycle Manager. After installing the operator, you must configure MetalLB through its custom resource definitions. In most situations, you must provide MetalLB with an IP address range.
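For example, the following resource definitions are a minimal sketch of a layer 2 MetalLB configuration; the resource names and the address range are assumptions that you must adapt to your network.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool              # assumed pool name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.50.20-192.168.50.40   # assumed range; must be reachable on your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool                  # advertise addresses from this pool over layer 2
```

MetalLB assigns addresses from the pool to services of the LoadBalancer type, and the L2Advertisement resource makes those addresses reachable from the local network.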

Using Load Balancer Services

When a cluster has a configured load balancer component, you can create services of the LoadBalancer type to expose non-HTTP services outside the cluster.

For example, the following resource definition exposes port 1234 on pods with the example value for the name label.

apiVersion: v1
kind: Service
metadata:
  name: example-lb
  namespace: example
spec:
  ports:
  - port: 1234 1
    protocol: TCP
    targetPort: 1234
  selector:
    name: example 2
  type: LoadBalancer 3

1 Exposed port

2 Pod selector

3 LoadBalancer service type

You can also use the kubectl expose command with the --type LoadBalancer argument to create load balancer services imperatively.
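For example, assuming a deployment named example exists in the current project, a command such as the following creates an equivalent load balancer service:

```shell
# Imperative equivalent of the preceding resource definition;
# the deployment name and ports are assumptions for illustration.
kubectl expose deployment example --name example-lb \
  --type LoadBalancer --port 1234 --target-port 1234
```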

After you create the service, the load balancer component updates the service resource with information such as the public IP address where the service is available.

[user@host ~]$ kubectl get service
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
example-lb   LoadBalancer   172.30.21.79     192.168.50.20   1234:31265/TCP   4m7s

You can now connect to the service on port 1234 of the 192.168.50.20 address.

You can also obtain the address from the status field of the resource.

[user@host ~]$ oc get service example-lb -o jsonpath="{.status.loadBalancer.ingress}"
[{"ip":"192.168.50.20"}]
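For example, assuming that the nc command is available on your system, you can store the address in a variable and test TCP connectivity to the exposed port:

```shell
# Extract the external IP address of the service (names assumed from the example).
EXTERNAL_IP="$(oc get service example-lb \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
# Check that the exposed port accepts TCP connections.
nc -z "$EXTERNAL_IP" 1234 && echo "service reachable"
```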

Each load balancer implementation allocates IP addresses to services by following a different process. For example, when installing MetalLB, you must provide the ranges of IP addresses that MetalLB assigns to services.

After exposing a service by using a load balancer, always verify that the service is available from your intended network locations. Use a client for the exposed protocol to ensure connectivity, and test that load balancing works as expected. Some protocols might require further adjustments to work correctly behind a load balancer. You can also use network debugging tools, such as the ping and traceroute commands, to examine connectivity.

References

For more information, refer to the Load Balancing with MetalLB chapter in the Red Hat OpenShift Container Platform 4.14 Networking documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#load-balancing-with-metallb

Kubernetes Services

MetalLB on OpenShift

Revision: do280-4.14-08d11e1