Abstract

Goal: Protect network traffic between applications inside and outside the cluster.
OpenShift Container Platform offers many ways to expose your applications to external networks.
You can expose HTTP and HTTPS traffic, TCP applications, and also non-TCP traffic.
Some of these methods are service types, such as NodePort or LoadBalancer, whereas others use their own API resource, such as Ingress and Route.
With OpenShift routes, you can expose your applications to external networks, to reach the applications with a unique, publicly accessible hostname. Routes rely on a router plug-in to redirect the traffic from the public IP to pods.
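As a minimal sketch, you can expose a service with a route by using the oc expose command. The api-frontend service name and the hostname below are example values:

```shell
# Expose an existing service with an unsecured route.
# "api-frontend" and the hostname are illustrative values.
oc expose service api-frontend \
  --hostname api.apps.acme.com

# Inspect the resulting route and its assigned hostname.
oc get route api-frontend
```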
The following diagram shows how a route exposes an application that runs as pods in your cluster:
For performance reasons, routers send requests directly to pods based on the service configuration.
The dotted line in the diagram indicates this implementation: the router does not access the pods through the service network.
Routes can be either secured or unsecured. Secure routes support several types of transport layer security (TLS) termination to serve certificates to the client. Unsecured routes are the simplest to configure, because they require no key or certificates. By contrast, secured routes encrypt traffic to and from the pods.
A secured route specifies the TLS termination of the route. The following termination types are available:
Figure: OpenShift Secure Routes
With edge termination, TLS termination occurs at the router, before the traffic is routed to the pods. The router serves the TLS certificates, so you must configure them into the route; otherwise, OpenShift assigns its own certificate to the router for TLS termination. Because TLS is terminated at the router, connections from the router to the endpoints over the internal network are not encrypted.
With passthrough termination, encrypted traffic is sent straight to the destination pod without TLS termination from the router. In this mode, the application is responsible for serving certificates for the traffic. Passthrough is currently the only method that supports mutual authentication between the application and a client that accesses it.
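A passthrough route can be created with the oc create route passthrough command. Because the router does not terminate TLS, no key or certificate is passed to it; the route and service names below are example values:

```shell
# Create a passthrough route; TLS terminates in the pod,
# so the router receives no key or certificate.
oc create route passthrough api-frontend \
  --service api-frontend \
  --hostname api.apps.acme.com
```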
Re-encryption is a variation on edge termination, whereby the router terminates TLS with a certificate, and then re-encrypts its connection to the endpoint, which might have a different certificate. Therefore, the full path of the connection is encrypted, even over the internal network. The router uses health checks to determine the authenticity of the host.
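A re-encrypt route can be sketched with the oc create route reencrypt command. The file names below are illustrative; the --dest-ca-cert option provides the CA that the router uses to validate the certificate of the internal endpoint:

```shell
# Create a re-encrypt route. The router presents api.crt/api.key
# to external clients and validates the internal endpoint
# against the CA in internal-ca.crt.
oc create route reencrypt api-frontend \
  --service api-frontend --hostname api.apps.acme.com \
  --key api.key --cert api.crt \
  --dest-ca-cert internal-ca.crt
```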
Before creating a secure route, you need a TLS certificate. The following command shows how to create a secure edge route with a TLS certificate:
[user@host ~]$ oc create route edge \
    --service api-frontend --hostname api.apps.acme.com \
    --key api.key --cert api.crt
The --key option specifies the file that contains the TLS private key.
The --cert option specifies the file that contains the TLS certificate.
When using a route in edge mode, the traffic between the client and the router is encrypted, but traffic between the router and the application is not encrypted:
Network policies can help you to protect the internal traffic between your applications or between projects.
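As an illustrative sketch, a network policy that restricts ingress to traffic from pods in the same project could look like the following (the policy name is an assumption):

```shell
# Apply a network policy that allows ingress only from pods
# in the same namespace; all other ingress traffic is denied.
cat <<'EOF' | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF
```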
The previous example demonstrates how to create an edge route, which means an OpenShift route that presents a certificate at the edge. Passthrough routes offer a secure alternative, because the application exposes its TLS certificate. As such, the traffic is encrypted between the client and the application.
To create a passthrough route, you need a certificate and a way for your application to access it. The best way to provide the certificate is by using OpenShift TLS secrets. Secrets are exposed via a mount point into the container.
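A TLS secret can be created from an existing certificate and key pair with the oc create secret tls command. The secret name and file names below are example values:

```shell
# Create a TLS secret from a certificate and key pair.
# "api-certs" and the file names are illustrative.
oc create secret tls api-certs \
  --cert api.crt --key api.key
```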
The following diagram shows how you can mount a secret resource in your container.
The application is then able to access your certificate.
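As a sketch, the oc set volume command can mount such a secret into a deployment; the deployment name, secret name, and mount path are assumptions:

```shell
# Mount the TLS secret into the deployment so that the
# application can read the certificate and key from the
# /etc/pki/api directory inside the container.
oc set volume deployment/api-frontend \
  --add --type secret \
  --secret-name api-certs \
  --mount-path /etc/pki/api
```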
Re-encrypt routes provide end-to-end encryption. First, re-encrypt routes terminate the encryption between an external client and the router. This encryption uses a certificate with a fully qualified domain name (FQDN) that is trusted by the client, such as the my-app.example.com hostname.
Then, the router re-encrypts the connection when accessing an internal cluster service. This internal communication requires a certificate for the target service with an OpenShift FQDN, such as the my-app.namespace.svc.cluster.local hostname.
The certificates for internal TLS connections require a public key infrastructure (PKI) to sign the certificate. With an OpenShift service certificate, you can mount a secret that contains a certificate and key pair into an application. This feature uses the OpenShift PKI to generate the certificate and key into a service-specific secret.
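To request a service certificate from the OpenShift service CA, annotate the service with the serving-cert-secret-name annotation; the service and secret names below are example values:

```shell
# Ask the OpenShift service CA to generate a certificate and
# key for this service; the pair is stored in the named secret.
oc annotate service api-frontend \
  service.beta.openshift.io/serving-cert-secret-name=api-certs
```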
For more information about how to manage routes, refer to the Configuring Routes chapter in the Red Hat OpenShift Container Platform 4.14 Networking documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#configuring-routes
For more information about how to configure ingress cluster traffic, refer to the Configuring Ingress Cluster Traffic chapter in the Red Hat OpenShift Container Platform 4.14 Networking documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/networking/index#configuring-ingress-cluster-traffic
For more information, see the Red Hat blog post Self-Serviced End-to-end Encryption Approaches for Applications Deployed in OpenShift.