RHOCP and Kubernetes provide several mechanisms to make your applications available from outside the cluster.
Kubernetes provides different service resource types to configure external access:
The NodePort resource type exposes a network port, on all your cluster nodes, that redirects the incoming traffic to the service's pods or VMs.
The LoadBalancer resource type instructs RHOCP to activate a load balancer in a cloud environment.
A LoadBalancer service instructs Kubernetes to interact with the cloud provider that the cluster is running in to provision a load balancer.
The load balancer then provides an externally accessible IP address to the application.
Red Hat OpenShift provides route resources to expose your applications to networks outside the cluster.
Routes provide ingress for HTTP and HTTPS traffic, and for TLS traffic with Server Name Indication (SNI).
With routes, you can access your application with a unique hostname that is publicly accessible.
Kubernetes provides ingress resources that are similar to route resources, although routes provide more features, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments.
When you migrate a VM into RHOCP from another hypervisor technology, you can configure external access so that clients can still access the application. You can also make a database server available to services that are running outside your OpenShift cluster. You can expose legacy applications that you plan later to convert to containers. After converting an application, you reconfigure external access so that clients can continue to access the application with the same DNS name.
Service resources use the ClusterIP resource type by default.
With that type, the service is accessible only from inside the cluster.
By setting the type parameter to NodePort, Kubernetes opens the same network port on all the cluster nodes and then redirects the incoming traffic to your pods or VMs.
By default, Kubernetes allocates a port in the 30000 to 32767 range.
With this configuration, an external client can reach your service by targeting the IP address of one of your cluster nodes and the allocated port.
The following service definition declares a service resource of the NodePort type:
apiVersion: v1
kind: Service
metadata:
  name: database
  namespace: prod2
spec:
  type: NodePort
  selector:
    vmtype: linux
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
    nodePort: 30336
The type parameter declares the NodePort service type.
Inside the cluster, the service makes the database application available on port 3306.
The service behaves similarly to a service of the ClusterIP type for clients inside the cluster.
The nodePort parameter sets the port that Kubernetes opens on all the cluster nodes, 30336 in this example.
Although it is possible to use the NodePort service type for TCP and UDP traffic, Red Hat recommends avoiding the use of the NodePort service type, for the following reasons:
You must expose the IP addresses of your cluster nodes from outside the cluster.
You cannot select a port outside the default range (30000-32767).
In cloud environments, the IP address of cluster nodes might not be permanent. For example, the cloud infrastructure might add or remove nodes to adapt the cluster to the load.
You can create services of the LoadBalancer type to provide external access to your applications on clusters that are deployed on cloud infrastructure.
When you create that type of service, Kubernetes automatically configures an external load balancer from the cloud provider.
For example, when you create a service on a cluster that is deployed on Amazon Web Services (AWS), Kubernetes creates a load balancer with Amazon Elastic Load Balancing (ELB).
With a cluster that is deployed on IBM Cloud, Kubernetes creates the load balancers with IBM Cloud load balancers, and on Microsoft Azure, Kubernetes creates the load balancers with Azure Load Balancer.
The following service definition declares a service resource of the LoadBalancer type, which defines a loadBalancerIP IP address:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: prod2
spec:
  type: LoadBalancer
  loadBalancerIP: 40.121.123.12
  selector:
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
The type parameter declares the LoadBalancer service type.
Some cloud providers accept an IP address to assign to the load balancer.
With some providers, you must reserve that address before creating the service.
If you do not set the loadBalancerIP parameter, the cloud provider assigns an IP address to the load balancer.
Because each LoadBalancer service creates a load balancer with the cloud provider, it can become expensive to use that kind of service.
For clusters that are deployed on bare metal, services of the LoadBalancer type behave differently.
Kubernetes assigns an IP address to the service from a pool of external addresses that the cluster administrators prepare.
Cluster administrators must configure these external IP addresses so that the network can route them to a cluster node. When the cluster nodes receive packets where the destination IP is the external IP of the service, they forward the traffic to the pods or virtual machines that are associated with the service.
The following service definition declares a service resource of type LoadBalancer that defines an externalIP IP address:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: prod2
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.42
  selector:
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
The type parameter declares the LoadBalancer service type.
The externalIPs parameter lists the external IP addresses to assign to the service, 192.168.0.42 in this example. The address must come from the pool of external addresses that the cluster administrators prepared.
You can use the LoadBalancer service type for TCP and UDP traffic.
MetalLB is a load balancer component that provides a load balancing service for clusters that do not run on a cloud provider, such as a bare metal cluster, or clusters that run on hypervisors. MetalLB operates in two modes: layer 2 and Border Gateway Protocol (BGP), with different properties and requirements. When planning for MetalLB, consider your requirements and your network design.
MetalLB is an operator that you can install with the Operator Lifecycle Manager. After installing the operator, you must configure MetalLB through its custom resource definitions. In most situations, you must provide MetalLB with an IP address range.
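As an illustration, a minimal layer 2 configuration might look like the following sketch. The resource names and the address range are assumptions for this example; adjust the range to addresses that are available on your network.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool          # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.100-192.168.0.120   # range that MetalLB assigns to LoadBalancer services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2            # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool
```

With this configuration, MetalLB answers ARP requests for the assigned addresses from one of the cluster nodes, so no BGP-capable router is required.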
Red Hat OpenShift routes provide ingress HTTP, HTTPS, and TLS traffic to services in the cluster. A route connects a public-facing DNS hostname to an internal-facing service IP.
Red Hat OpenShift implements routes by deploying a cluster-wide router service, which runs an HAProxy load balancer as a containerized application in the OpenShift cluster. Red Hat OpenShift scales and replicates router pods like any other application.
Red Hat OpenShift can automatically assign a DNS entry to your application when creating a route.
Red Hat OpenShift assigns a name in the routename-namespace.default_domain format.
The RHOCP administrators configure the default_domain base name during the cluster installation.
For example, if your default domain is apps.mycompany.com and you create a route named intranet in the prod project, then the DNS name is intranet-prod.apps.mycompany.com.
RHOCP administrators must configure the company's DNS system so that the *.default_domain wildcard DNS record points to the public-facing IP addresses of the nodes that are running the router.
The DNS server that hosts the wildcard domain knows nothing about route hostnames. The server merely resolves any name to the configured IP addresses. Only the Red Hat OpenShift router knows about route hostnames, and treats each one as an HTTP virtual host. The Red Hat OpenShift router blocks invalid wildcard domain hostnames that do not correspond to any route and returns an HTTP error.
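For example, in a BIND style zone file, the wildcard record might look like the following sketch. The domain and the IP addresses are assumptions for this illustration.

```
; Resolve every route hostname to the nodes that run the router
*.apps.mycompany.com.   300  IN  A  192.0.2.10
*.apps.mycompany.com.   300  IN  A  192.0.2.11
```

Any hostname under the wildcard domain, such as intranet-prod.apps.mycompany.com, resolves to these addresses; the router then selects the backing service from the HTTP Host header or the SNI hostname.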
Routes work only with certain types of traffic: HTTP, HTTPS with Server Name Indication (SNI), and TLS with SNI. With SNI, the client sends the name of the host that it tries to reach in clear text during the TLS handshake. Red Hat OpenShift uses that feature to identify the target service when it receives encrypted traffic.
For other traffic types, such as UDP traffic or non-web TCP traffic, Red Hat recommends that you use services of the LoadBalancer or the NodePort type.
After creating a service of the ClusterIP type, which points to your pods or virtual machines, you can create a route for your application.
You can create a resource file in the YAML or the JSON format to declare your route.
However, Red Hat OpenShift provides the oc expose service command to simplify the creation process:
[user@host ~]$ oc expose service/web
route.route.openshift.io/web exposed

That command creates a route resource with the same name as the service.
Use the --name option to define a different name.
Red Hat OpenShift automatically assigns a DNS name to the route.
You can use the --hostname option followed by the DNS name to define it manually.
The following command creates a web route that defines the web-production.apps.mycompany.com hostname.
[user@host ~]$ oc expose service/web --name web --hostname web-production.apps.mycompany.com
route.route.openshift.io/web exposed

Use the oc get route command to retrieve the DNS name that is assigned to the route:

[user@host ~]$ oc get route web
NAME   HOST/PORT                           PATH   SERVICES   PORT   TERMINATION   WILDCARD
web    web-production.apps.mycompany.com          web        8080                 None
From the web console, navigate to Networking → Routes, select a namespace, click Create Route, and then complete the form with the route details. You must name the route and select the associated service and its ports.

The route resource definition accepts the path variable to specify the path component to the URL.
With that configuration, you can use the same DNS name for several services and redirect the traffic based on the path component.
For example, you could use the http://intranet-prod.apps.mycompany.com/static URL to send traffic to pods or to virtual machines that serve static web content.
For REST API requests, you could use the http://intranet-prod.apps.mycompany.com/api URL to forward the traffic to the application back end.
Add the --path option to the oc expose service command to configure path-based routes:
[user@host ~]$ oc expose service/static --path=/static \
  --hostname=intranet-prod.apps.mycompany.com
route.route.openshift.io/static exposed
[user@host ~]$ oc expose service/restapi --path=/api \
  --hostname=intranet-prod.apps.mycompany.com
route.route.openshift.io/restapi exposed
[user@host ~]$ oc get routes
NAME      HOST/PORT                          PATH      SERVICES   PORT ...
static    intranet-prod.apps.mycompany.com   /static   static     8080 ...
restapi   intranet-prod.apps.mycompany.com   /api      restapi    80   ...
Routes can be secured (TLS or HTTPS) or unsecured (HTTP). Secure routes enable using several types of TLS termination to serve certificates to the client.
A secured route specifies the TLS termination of the route. The following list describes the available termination types:

Edge termination
TLS termination occurs at the router, before Red Hat OpenShift routes the traffic to the pods or to the virtual machines. The router serves the TLS certificates to the clients. You can provide your certificate when you create the route, or Red Hat OpenShift can use its own certificate. Because TLS is terminated at the router, connections from the router to the endpoints over the internal network are not encrypted.

Passthrough termination
The router sends the encrypted traffic straight to the destination pod or virtual machine. In this mode, the application is responsible for serving the certificates for the traffic.

Re-encryption termination
Re-encryption is a variation of edge termination. The router terminates TLS with a certificate, and then re-encrypts its connection to the endpoint, which might use a different certificate. Therefore, the full path of the connection is encrypted, even over the internal network.
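As a sketch, an edge-terminated route definition might look like the following. The route name and hostname are assumptions for this illustration; the certificate and key fields are omitted, in which case the router serves the cluster default certificate.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web-secure            # hypothetical name
spec:
  host: web.apps.mycompany.com
  to:
    kind: Service
    name: web
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect   # redirect plain HTTP clients to HTTPS
```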
The references section at the end provides links to documents that explain how to configure secured routes.
To configure a route for a VM, you must first create a service for the virt-launcher pod of the VM:
Add a label to the VirtualMachine resource in the .spec.template.metadata.labels path.
Create the service resource from a definition file in YAML format.
Use the label to select the virt-launcher pod.
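The first step, adding the label to the VirtualMachine resource, can be sketched as follows. The VM name is hypothetical; the label must match the selector of the service that you create next.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: web-vm                # hypothetical VM name
spec:
  template:
    metadata:
      labels:
        tier: front           # label that the service selector matches
# ...remaining VM specification omitted...
```

Because the label is under .spec.template.metadata.labels, KubeVirt applies it to the virt-launcher pod that runs the VM, which makes the pod selectable by the service.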
The following example expects the virt-launcher pod to have the tier: front label.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: prod2
spec:
  type: ClusterIP
  selector:
    tier: front
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

After you create the service, you can then create the route.
From the command line, use the oc expose service command to create the route:
[user@host ~]$ oc expose service/web
route.route.openshift.io/web exposed

When you use network policies in your project, you must add a rule to enable the traffic that comes from the router pods.
Otherwise, the route resources do not work, because the router pods cannot reach the destination pods and virtual machines.
The router pods run in the openshift-ingress namespace.
That namespace already has a label that you can use with your network policies.
The following command shows that the label is network.openshift.io/policy-group: ingress:
[user@host ~]$ oc get namespace openshift-ingress -o yaml
kind: Namespace
apiVersion: v1
metadata:
  ...output omitted...
  labels:
    kubernetes.io/metadata.name: openshift-ingress
    network.openshift.io/policy-group: ingress
    olm.operatorgroup.uid/44814614-edc3-4e46-9491-5f0f97af9c84: ""
    openshift.io/cluster-monitoring: "true"
    policy-group.network.openshift.io/ingress: ""
  name: openshift-ingress
...output omitted...
Add the following NetworkPolicy resource to your project to enable the traffic from the router pods:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
The route resources are specific to Red Hat OpenShift.
Kubernetes provides ingress resources, with some of the same features as routes.
Some route features, such as TLS re-encryption, are not available with ingress resources.
The following example defines an ingress resource named web:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web-production.apps.mycompany.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080

The host parameter defines the DNS name for accessing the application.
The path parameter plays the same role as the path in route resources. In this example, the Prefix path type matches every URL path that begins with /.
The backend section directs the traffic to the web service.
The port parameter specifies the service port, 8080 in this example.
Red Hat OpenShift implements ingress resources with route resources.
The following example shows the route resource that Red Hat OpenShift creates:
[user@host ~]$ oc get ingress
NAMESPACE    NAME   CLASS    HOSTS                 ADDRESS                ...
production   web    <none>   web-production...com  router-default...com   ...
[user@host ~]$ oc get routes
NAME        HOST/PORT                           PATH   SERVICES   PORT       ...
web-djh2b   web-production.apps.mycompany.com   /      web        8080-tcp   ...
To create an ingress resource from the web console, navigate to Networking → Ingresses, click Create Ingress, and then complete the YAML manifest with the ingress parameters.

Kubernetes Ingress versus OpenShift Route
For more information, refer to the Configuring Ingress Cluster Traffic chapter in the Red Hat OpenShift Container Platform Networking guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#configuring-ingress-cluster-traffic
For more information on secured routes, refer to the Secured Routes section in the Red Hat OpenShift Container Platform Networking guide at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#configuring-default-certificate