In certain scenarios, you might need to authenticate external clients, such as CI/CD pipelines or monitoring tools, to the Kubernetes cluster.
Kubernetes enables external clients to authenticate to the Kubernetes API by embedding either a client certificate or an authentication token into a kubeconfig configuration file.
This feature ensures that only authorized external clients can access the Kubernetes cluster.
A service account (SA) is a Kubernetes resource that provides an identity for processes that run in a pod. SA tokens authenticate the interactions between components within the OpenShift cluster, as well as with external resources. OpenShift uses SA tokens to grant permissions to pods and other resources to interact with the Kubernetes API server. External services use SA tokens to access other OpenShift resources, or to access the Kubernetes API. With SAs, you can control API access without the need to borrow a regular user's credentials. SAs are specific to a particular project and cannot be directly shared across projects.
For example, you can use SAs in the following scenarios:
A replication controller makes API calls to create or delete pods.
A pod that collects logs might need access to certain log storage resources.
A backup and restore tool might require access to the cluster's configuration and data to back up and restore data.
An external application makes monitoring or integration API calls.
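As a minimal sketch of the first step for these scenarios, you can create a dedicated SA by using the oc create serviceaccount command. The monitoring-sa name and the my-project project are illustrative:
[user@host ~]$ oc create serviceaccount monitoring-sa -n my-project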
When you create an SA in OpenShift, the SA automatically contains two secrets: an API token and credentials for the OpenShift Container Registry. The automatically generated API token and credentials never expire. However, you can revoke them by deleting the corresponding secrets; OpenShift then automatically generates new secrets to replace them.
Starting with Kubernetes 1.24, SAs do not automatically create the secrets that contain long-term credentials for accessing the Kubernetes API.
Thus, you must obtain the credentials by using the TokenRequest API.
The tokens that the TokenRequest API provides are more secure than the tokens that are stored in secrets, because the TokenRequest tokens have a bounded lifetime; other API clients cannot read the TokenRequest tokens; and OpenShift automatically invalidates the TokenRequest tokens when the pod that they are mounted into is deleted.
Although Red Hat OpenShift Container Platform (RHOCP) 4.14 is based on Kubernetes 1.27, it still creates the SA token secrets to communicate with the Kubernetes API server, because some features and workloads need the SA secrets.
However, this behavior will change in a future release.
Thus, although you can use the automatically generated token secret to authenticate the SA to the Kubernetes API server, Red Hat recommends using the TokenRequest API to generate tokens that are bound to the SA, and not relying on the automatically generated token secret.
You can still manually create long-lived API tokens for SAs.
However, Red Hat recommends using long-lived API tokens only if you cannot use the TokenRequest API and if the security exposure of a non-expiring token is acceptable to you.
To manually create long-lived API tokens for SAs, refer to https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/nodes/index#nodes-pods-secrets-creating-sa_nodes-pods-secrets
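Such a manually created long-lived token takes the form of a secret of type kubernetes.io/service-account-token that references the SA in an annotation; OpenShift then populates the secret with a token. The following minimal sketch assumes a hypothetical my-sa SA:
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  annotations:
    kubernetes.io/service-account.name: my-sa
type: kubernetes.io/service-account-token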
The following steps summarize the authentication flow by using the TokenRequest API:
An application that is running in a pod needs to authenticate to the Kubernetes API.
The kubelet requests a bound token for the pod's SA through the TokenRequest API, and mounts the token into the pod as a projected volume.
The application reads the SA token and uses it to authenticate to the Kubernetes API.
For external applications, you can manually generate a bound SA token through the TokenRequest API, and use it as the bearer token for authentication to the Kubernetes API.
Generating a bound SA token in OpenShift by using the TokenRequest API does not remove or override the automatically generated token secret.
After creating the SA, use the following command to generate a bound SA token by using the TokenRequest API:
[user@host ~]$ oc create token SA_name -n project
You can set the lifetime for the SA token by using the --duration option.
The default lifetime for the SA token is one hour.
You must manually refresh the manually generated SA token before it expires, so that your application can continue authenticating to the Kubernetes API.
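For example, the following hedged sketch generates a token with a 24-hour lifetime and uses it as a bearer token against the Kubernetes API. The my-sa SA, the my-project project, the ca.crt file, and the API server URL are illustrative:
[user@host ~]$ TOKEN=$(oc create token my-sa -n my-project --duration=24h)
[user@host ~]$ curl -sS --cacert ca.crt -H "Authorization: Bearer $TOKEN" \
  https://api.ocp4.example.com:6443/api/v1/namespaces/my-project/pods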
The following example shows a decoded JWT SA token:
HEADER
...output omitted...
PAYLOAD:DATA
{
  "aud": [
    "https://kubernetes.default.svc"
  ],
  "exp": 1698059299,
  "iat": 1698055699,
  "iss": "https://kubernetes.default.svc",
  "kubernetes.io": {
    "namespace": "my-project",
    "serviceaccount": {
      "name": "my-sa",
      "uid": "3acb4630-babe-4e85-996d-cbc80f62146f"
    }
  },
  "nbf": 1698055699,
  "sub": "system:serviceaccount:my-project:my-sa"
}
SIGNATURE
...output omitted...
The decoded payload includes the following fields:
exp: SA token expiration date in Unix epoch format.
iat: SA token issued date in Unix epoch format.
namespace: Project for the SA token.
name: Name of the SA.
OpenShift assigns an SA to every pod that you deploy. By default, OpenShift creates the following SAs for every project:
builder: OpenShift uses this SA to build pods.
By default, this SA has the system:image-builder role, so pods that run with this SA can push images to any image stream in the project by using the internal OpenShift container registry.
deployer: OpenShift uses this SA in deployment pods.
By default, this SA has the system:deployer role, so the deployment pods can view and modify replication controllers and pods in the project.
This SA exists only for applications that use OpenShift deployment configuration resources.
default: OpenShift assigns this default SA to pods if you do not specify a different SA when you create the pods.
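For example, listing the SAs in a newly created project shows these default SAs. The my-project project and the command output are illustrative:
[user@host ~]$ oc get sa -n my-project
NAME       SECRETS   AGE
builder    1         5d
default    1         5d
deployer   1         5d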
You can manage SAs by using the typical oc commands, such as create, get, or describe.
You can also grant roles to SAs in the same way as for regular users.
To grant a role to an SA, you must use the SA name together with the project name and the system:serviceaccount prefix, according to the following syntax:
system:serviceaccount:<project>:<name>
You can also use the -z option to avoid typing the long system:serviceaccount:<project>:<name> string, and instead specify only the SA name; the -z option applies to the SA in the current project.
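For example, the following hedged, equivalent commands grant the view role to a hypothetical monitoring-sa SA in the my-project project, first with the long syntax and then with the -z option:
[user@host ~]$ oc adm policy add-role-to-user view \
  system:serviceaccount:my-project:monitoring-sa -n my-project
[user@host ~]$ oc adm policy add-role-to-user view -z monitoring-sa -n my-project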
Whenever you create an SA, it is automatically a member of the following groups:
system:serviceaccounts, which includes all the SAs in the cluster.
system:serviceaccounts:<project>, which includes all the SAs in the specified project.
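For example, the following hedged command grants the view cluster role to all the SAs in a hypothetical my-project project by granting the role to the project SA group:
[user@host ~]$ oc adm policy add-cluster-role-to-group view \
  system:serviceaccounts:my-project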
Client certificate authentication in Kubernetes clusters refers to the process of authenticating clients, such as users or services, which access the Kubernetes cluster by using TLS client certificates.
By default, OpenShift provides an internal certificate authority (CA). The OpenShift internal CA is a built-in component of the OpenShift cluster that manages and issues digital certificates in the cluster. The internal CA provides a trusted source for generating X.509 certificates for secure communication, authentication, and encryption in the OpenShift cluster.
The Kubernetes API server requires client authentication by using client certificates. OpenShift is preconfigured to trust the client certificates that the OpenShift internal CA signs.
You can also configure additional client CAs for the Kubernetes API server. For more information about configuring additional client CAs for the Kubernetes API server, refer to https://access.redhat.com/solutions/6054271
This feature is mainly used to generate a client certificate for an administrator user, such as the predefined system:admin user. This client certificate serves as a backdoor for cluster administrators if the identity provider (IdP) that provides the administrator credentials fails.
Although Red Hat does not recommend doing so, you can also use client certificates in certain scenarios for clients that run outside the cluster, such as CI/CD pipelines, automation playbooks, or monitoring tools. These clients must present valid client certificates during the TLS handshake to establish a secure connection with the API server. This mechanism ensures that only authorized clients can access the API server from outside the cluster.
Red Hat recommends using SAs to run automation outside the cluster whenever you can, instead of using client certificates, because a certificate cannot be revoked in OpenShift. Revoking a client certificate would require invalidating all client certificates that the current CA ever signed, and creating replacement certificates for all client users and applications. For more information about this topic, refer to https://github.com/kubernetes/kubernetes/issues/18982
OpenShift assigns the username and the groups for the user by using the common name (CN) and the organization (O) fields from the certificate, respectively. You can use role-based access control (RBAC) rules to provide the minimal rights to the client account to perform the job. For example, you could provide view permissions for the entire cluster to a monitoring tool, or edit permissions on selected projects to a CI/CD pipeline.
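For example, assuming a client certificate that was issued with CN=monitor for a monitoring tool, the following command grants view permissions for the entire cluster to the resulting monitor user:
[user@host ~]$ oc adm policy add-cluster-role-to-user view monitor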
To create client certificates by using the internal OpenShift CA, you must follow these steps:
Create a certificate signing request (CSR): The first step is to create a CSR for the client. You can use the OpenSSL tool to create the CSR. This request includes the client's information and the public key. You can use more than one group for the user if you require it.
[user@host ~]$ openssl req -nodes -newkey rsa:4096 -keyout key_filename \
-subj "/O=group1/O=group2/CN=username" -out csr_filename
Submit the CSR to the Kubernetes API server: The API server interacts with the internal CA to process the CSR. The following example shows the parameters for a CSR:
[user@host ~]$ cat << EOF | oc create -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: csr_name
spec:
signerName: kubernetes.io/kube-apiserver-client
expirationSeconds: 604800 # one week
request: $(base64 -w0 csr_filename)
usages:
- client auth
EOF
The example includes the following parameters:
name: The CSR name.
signerName: OpenShift provides the kubernetes.io/kube-apiserver-client signer, which the internal CA uses to issue client certificates for authenticating to the Kubernetes API server.
expirationSeconds: Expiration time for the certificate in seconds. If you do not specify a value, then the default value is 30 days.
request: Contains the OpenSSL X.509 CSR, which is encoded in Base64.
usages: Specifies the use cases for the client certificate. It must include the client auth usage.
Review, approve, and sign the CSR: You can review the CSR details by using the oc describe csr command.
After reviewing the CSR details, use the following command so the internal CA signs the CSR to generate the client certificate:
[user@host ~]$ oc adm certificate approve csr_name
Retrieve the client certificate: After you approve the CSR and the OpenShift internal CA generates the certificate, you can retrieve the signed certificate from the API server.
[user@host ~]$ oc get csr csr_name -o jsonpath='{.status.certificate}' \
| base64 -d > certificate_filename
Distribute the certificate: The signed certificate, together with the private key that you used to generate the CSR, enables the client to authenticate to the cluster.
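As a hedged sketch, the client can then pass the certificate and key directly to the oc command; the API server URL and the ca.crt file are illustrative:
[user@host ~]$ oc get pods --server https://api.ocp4.example.com:6443 \
  --certificate-authority ca.crt \
  --client-certificate certificate_filename --client-key key_filename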
You can use a kubeconfig file as a command-line interface (CLI) configuration file to set up profiles for use with the Kubernetes kubectl and OpenShift oc CLI tools.
Moreover, most Kubernetes client libraries use kubeconfig files in the same way as the kubectl and oc CLI tools.
Use kubeconfig files to authenticate external applications to the cluster by storing tokens and client certificates inside the kubeconfig files.
The kubeconfig file is defined as a YAML file that contains clusters, users, and contexts.
The clusters parameter in the kubeconfig file contains information about the OpenShift clusters, such as the IP address or fully qualified domain name (FQDN) of the API server, and the CA certificate.
The users parameter contains the user credentials to interact with the Kubernetes API.
This parameter contains information such as the username, the user password, or the user token.
The contexts parameter contains information about the combination of a cluster and a user to interact with the Kubernetes API.
Whenever you run an oc command, you reference a context inside the kubeconfig file.
When you run any oc command, the oc command locates the kubeconfig file by checking the following sources in order:
The specified file in the --kubeconfig option, if you use it
The specified file in the KUBECONFIG environment variable, if it is set
The default kubeconfig file in ~/.kube/config
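For example, the following hedged commands point the oc command at a hypothetical kubeconfig file, first through the --kubeconfig option and then through the KUBECONFIG environment variable:
[user@host ~]$ oc get pods --kubeconfig /home/user/pipeline-kubeconfig
[user@host ~]$ export KUBECONFIG=/home/user/pipeline-kubeconfig
[user@host ~]$ oc get pods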
When you log in to the OpenShift cluster through the oc login command for the first time, OpenShift creates a kubeconfig file in the default location at ~/.kube/config, if the file does not exist.
You can add more authentication and connection details to the kubeconfig file automatically by using the oc login and oc project commands, or by manually editing the kubeconfig file.
You can read the details from your kubeconfig file by using the oc config view command or by opening the file with a text editor.
The following example shows the parameters for a kubeconfig file from an OpenShift cluster:
apiVersion: v1
clusters:
- cluster:
    server: https://api.ocp4.prod.com:6443
  name: production
- cluster:
    server: https://api.ocp4.stage.com:6443
    certificate-authority: ocp-apiserver-cert.crt
  name: stage
users:
- name: admin-production
  user:
    token: REDACTED
- name: admin-stage
  user:
    client-certificate: admin-stage.crt
    client-key: tls.key
contexts:
- context:
    cluster: production
    namespace: prod-app
    user: admin-production
  name: prod-app/api-ocp4-prod-com:6443/admin-production
- context:
    cluster: stage
    namespace: demo-app
    user: admin-stage
  name: demo-app/api-ocp4-stage-com:6443/admin-stage
current-context: demo-app/api-ocp4-stage-com:6443/admin-stage
kind: Config
preferences: {}
clusters: The list of all the clusters that you already connected to. The example defines two clusters, production and stage. The stage cluster entry specifies a CA certificate file.
users: The list of all the users that you already connected to the cluster. The example defines two users, admin-production and admin-stage. The admin-production user authenticates with a token, and the admin-stage user authenticates with a client certificate and key.
contexts: The list of contexts that you can reference when using the oc command. A context associates a cluster and a user, optionally with a namespace, and the current-context parameter sets the context that the oc command uses by default.
Although you can set most kubeconfig file parameters by using the oc login and oc project OpenShift commands, you can use the oc config command to manually configure these parameters if needed, instead of directly modifying the file.
You can use the oc config command with the --kubeconfig=file option to create and store kubeconfig files with different parameters that you can use when required.
The oc config command comes from the Kubernetes kubectl CLI tool, with no modifications from OpenShift.
Use the following oc config subcommands to manually configure your kubeconfig files:
set-cluster: Creates a cluster entry in the kubeconfig file.
This subcommand accepts different cluster parameters, such as the server IP or the CA file.
set-credentials: Creates a user entry in the kubeconfig file.
This subcommand accepts different user parameters, such as the username, the user password, or the token.
set-context: Creates a context entry in the kubeconfig file.
Use this subcommand to define a context to specify a combination of a user and a cluster.
You can also set the OpenShift project.
use-context: Sets the current context.
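The following hedged sequence sketches how these subcommands build a kubeconfig file for an external pipeline; the pipeline-kubeconfig file name, the cluster URL, the token_value placeholder, and the cicd project are illustrative:
[user@host ~]$ oc config set-cluster production \
  --server https://api.ocp4.example.com:6443 --kubeconfig pipeline-kubeconfig
[user@host ~]$ oc config set-credentials pipeline-sa \
  --token token_value --kubeconfig pipeline-kubeconfig
[user@host ~]$ oc config set-context pipeline --cluster production \
  --user pipeline-sa --namespace cicd --kubeconfig pipeline-kubeconfig
[user@host ~]$ oc config use-context pipeline --kubeconfig pipeline-kubeconfig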
For more information about manually configuring the kubeconfig files by using the oc config command, refer to https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/cli_tools/index#manual-configuration-of-cli-profiles_managing-cli-profiles
User impersonation in OpenShift enables certain users or SAs to act on behalf of other users or SAs with different permissions and roles. This feature is useful when administrators or privileged users need to perform actions on behalf of regular users, when SAs need to execute tasks on behalf of other SAs, or when regular users need to perform actions with administrative permissions. For example, a system administrator can debug an RBAC rule by impersonating another user and verifying whether OpenShift accepts or denies a request. As another example, you might want system administrators to escalate privileges only when necessary, instead of logging in as cluster administrators.
As a system administrator, use the --as and --as-group options when using the oc command to impersonate a user or group.
Use the oc auth can-i command to test the user access to a particular resource in the cluster.
Use the -n option to specify a project for the request, or use the -A option to verify the action in all the projects.
Thus, use the following command to test the RBAC rules for a particular user or group by impersonating them:
[user@host ~]$ oc auth can-i command --as user_to_impersonate \
--as-group group_to_impersonate
You can also list all the permissions for a specific user or group by using the oc auth can-i --list command.
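For example, the following hedged commands test whether a hypothetical developer user can delete pods in the my-project project, and then list all of that user's permissions in the project:
[user@host ~]$ oc auth can-i delete pods --as developer -n my-project
no
[user@host ~]$ oc auth can-i --list --as developer -n my-project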
To grant a regular user permissions to impersonate another user, you must create a custom role with the appropriate permissions, and then create a role binding to assign the role to the user.
For example, to allow a regular user to run commands as an administrator, you can create the following cluster role, which enables anyone to impersonate the administrator user:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sudo-admin
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames: ["admin"]
Then, bind the role to the user:
[user@host ~]$ oc create clusterrolebinding binding_name \
--clusterrole sudo-admin --user regularuser
The user can then impersonate the admin user by using the --as admin option.
The following example shows how the user cannot retrieve the node information when using their account, but can retrieve that information when impersonating the admin user:
[user@host ~]$ oc get nodes
Error from server (Forbidden): nodes is forbidden: User "regularuser" cannot list resource "nodes" in API group "" at the cluster scope
[user@host ~]$ oc get nodes --as admin
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   62d   v1.25.7+eab9cc9
worker01   Ready    worker                 57d   v1.25.7+eab9cc9
For more information about using SAs for applications on OpenShift, refer to the Using Service Accounts in Applications section in the Red Hat OpenShift Container Platform 4.14 Authentication and authorization documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/authentication_and_authorization/index#service-accounts-overview_using-service-accounts
For more information about certificates and certificate signing requests on OpenShift, refer to the Certificates and Certificate Signing Requests section in the Kubernetes documentation at https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
For more information about how to configure CLI profiles on OpenShift, refer to the Managing CLI Profiles section in the Red Hat OpenShift Container Platform 4.14 CLI Tools documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/cli_tools/index#about-switches-between-cli-profiles_managing-cli-profiles
For more information about user impersonation, refer to the User Impersonation section in the Kubernetes documentation at https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation