Forward cluster and pod logs to Amazon CloudWatch.
Logging for Red Hat OpenShift aggregates the log messages from all the pods and nodes in your cluster into a central store. Administrators can use a web interface to search the store and to consult the log entries.
Logging for Red Hat OpenShift is an optional subsystem that you install by deploying the Red Hat OpenShift Logging operator.
Because logs are often huge in volume, and processing them requires significant disk space and compute resources, Red Hat does not recommend hosting the whole logging stack on your Red Hat OpenShift Service on AWS (ROSA) cluster. Such a stack would run on your worker nodes and consume resources that should instead be available for your application workloads. You might need to deploy more compute nodes, which would increase the cost of your ROSA installation.
To prevent this extra load, you can configure the logging subsystem to forward the logs to Amazon CloudWatch. CloudWatch is an Amazon service that indexes and stores the logs that it receives from external sources. It provides a web interface for administrators to search and visualize the logs.
The logging subsystem collects various log types:
Application logs are from your workload pods.
Infrastructure logs are from the system pods and the compute nodes.
Audit logs come from the auditd daemon that runs on the compute nodes.
These logs might contain sensitive security details.
You can configure the logging subsystem to ignore audit logs.
Logging for Red Hat OpenShift relies on several components to collect, index, and render the logs:
Vector runs on the compute nodes. It collects the logs from the pods and nodes, and sends them to the log store. Vector replaces Fluentd, which was the log collector in earlier versions of the logging subsystem.
You can configure Vector to use Amazon CloudWatch as its log store.
Loki stores the logs that Vector collects. It indexes and then stores these incoming logs. Loki runs as an OpenShift application in your cluster. Loki replaces Elasticsearch, which was the log store in earlier versions of the logging subsystem.
If you configure Vector to forward the logs to Amazon CloudWatch, then you do not need to run Loki.
Kibana is a web console that administrators use to search and visualize logs from the log store. Kibana runs as an OpenShift application in your cluster.
If you configure Vector to use Amazon CloudWatch as its log store, then you do not need to run Kibana. Administrators use the AWS CLI or the CloudWatch web interface to search and visualize the logs.
Vector, which runs inside your ROSA cluster, uses the CloudWatch API to forward logs. For Vector to access the API, you need to create an AWS Identity and Access Management (IAM) policy and an IAM role.
You can create these IAM objects by using the ccoctl utility from the Cloud Credential Operator project, or you can create the IAM objects manually by using the AWS CLI.
The Cloud Credential Operator simplifies the configuration of credentials in the cloud provider infrastructure.
The project develops the ccoctl command that you download from the Red Hat Hybrid Cloud Console at https://console.redhat.com/openshift/downloads.
You can then use the ccoctl command to create the IAM resources that Vector requires to access the CloudWatch API.
The ccoctl command is available only if your workstation runs Linux.
For operating systems other than Linux, use the AWS CLI method, which is described later in this section.
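After you download the archive, extract the ccoctl binary and verify that it runs. The following commands are a minimal sketch; the archive name is an assumption and might differ depending on the version that you download:
$ tar -xvf ccoctl-linux.tar.gz    # assumed archive name; adjust to the file that you downloaded
$ chmod +x ccoctl
$ ./ccoctl aws create-iam-roles --help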
Use the ccoctl command as follows:
In a working directory, create a CredentialsRequest resource file that describes the AWS access that Vector needs:
---
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: mycluster-credrequest
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - action:
      - logs:PutLogEvents
      - logs:CreateLogGroup
      - logs:PutRetentionPolicy
      - logs:CreateLogStream
      - logs:DescribeLogGroups
      - logs:DescribeLogStreams
      effect: Allow
      resource: arn:aws:logs:*:*:*
  secretRef:
    name: cloudwatch-credentials
    namespace: openshift-logging
  serviceAccountNames:
  - logcollector
The statementEntries section lists the CloudWatch API actions that Vector must be allowed to perform.
The secretRef section specifies the name and the namespace of the secret that stores the resulting credentials. The logging subsystem expects this secret in the openshift-logging namespace.
Vector uses the logcollector service account in the openshift-logging project when accessing CloudWatch.
Retrieve the ARN of the OpenID Connect identity provider that the ROSA creation process created during the cluster installation.
The ccoctl command needs that information to configure the IAM role that it creates.
$ aws iam list-open-id-connect-providers
{
    "OpenIDConnectProviderList": [
        {
            "Arn": "arn:aws:iam::452954386616:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/235a3shus1umik6dfaln9gd11d894aun"
        }
    ]
}
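If the account contains several providers, or if you want to retrieve only the ARN value, you can add a --query expression to the command. The following sketch assumes that the cluster OIDC provider is the first entry in the list:
$ aws iam list-open-id-connect-providers \
    --query "OpenIDConnectProviderList[0].Arn" --output text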
Run the ccoctl command.
Use the --credentials-requests-dir option to specify the name of the directory that stores your CredentialsRequest resource file.
$ ccoctl aws create-iam-roles --name mycluster --region us-east-1 \
    --credentials-requests-dir ./credrequests \
    --identity-provider-arn "arn:aws:iam::452954386616:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/235a3shus1umik6dfaln9gd11d894aun"
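The ccoctl command creates the IAM role in AWS and generates the corresponding OpenShift secret manifest in a manifests directory inside your working directory. You can list that directory to confirm that the file exists (a minimal check, assuming the default output location):
$ ls manifests/
openshift-logging-cloudwatch-credentials-credentials.yaml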
Create the OpenShift secret object from the resource file that the ccoctl command creates.
$ oc apply -f manifests/openshift-logging-cloudwatch-credentials-credentials.yaml
Instead of using the ccoctl command, you can create the IAM resources by using aws commands.
Because the ccoctl command is available only for Linux, using the AWS CLI is the only available method for other operating systems.
The following steps describe the process of creating the required AWS resources:
Create the IAM policy.
The policy groups the permitted operations.
The following policy.json file declares the IAM policy that allows Vector to create CloudWatch log groups and log streams, and to send log events to them:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:PutRetentionPolicy"
],
"Resource": "arn:aws:logs:*:*:*"
}
]
}
Run the aws iam create-policy command to create the policy from the file.
In the following example, the command creates the RosaCloudWatch policy from the preceding policy.json file, and then displays the ARN of the new object.
$ aws iam create-policy --policy-name RosaCloudWatch \
    --policy-document file://policy.json --query Policy.Arn --output text
arn:aws:iam::452954386616:policy/RosaCloudWatch
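Because a later step references the policy ARN, you can optionally capture it in a shell variable instead of copying it manually. This is a sketch that assumes you run the remaining steps from the same shell session; POLICY_ARN is an arbitrary variable name:
$ POLICY_ARN=$(aws iam create-policy --policy-name RosaCloudWatch \
    --policy-document file://policy.json --query Policy.Arn --output text)
$ echo ${POLICY_ARN}
arn:aws:iam::452954386616:policy/RosaCloudWatch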
Retrieve the ARN of the OpenID Connect identity provider that the ROSA creation process created during the cluster installation. The IAM role that you create needs that information.
$ aws iam list-open-id-connect-providers
{
    "OpenIDConnectProviderList": [
        {
            "Arn": "arn:aws:iam::452954386616:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/235a3shus1umik6dfaln9gd11d894aun"
        }
    ]
}
Vector uses the logcollector service account in the openshift-logging project when accessing CloudWatch.
Grant this service account access to the IAM role.
The following trust-policy.json file declares the trust policy for the IAM role, and allows the logcollector service account to assume the role through the cluster OIDC provider:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::452954386616:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/235a3shus1umik6dfaln9gd11d894aun"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"rh-oidc.s3.us-east-1.amazonaws.com/235a3shus1umik6dfaln9gd11d894aun:sub": "system:serviceaccount:openshift-logging:logcollector"
}
}
}
]
}
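The Federated principal in the preceding trust policy uses the full ARN of the OIDC provider, and the Condition key uses the same provider without the arn:aws:iam::...:oidc-provider/ prefix. If you prefer not to edit that value by hand, the following sketch derives the shorter form from the ARN; the sed expression is only an illustration:
$ aws iam list-open-id-connect-providers \
    --query "OpenIDConnectProviderList[0].Arn" --output text | sed 's|.*oidc-provider/||'
rh-oidc.s3.us-east-1.amazonaws.com/235a3shus1umik6dfaln9gd11d894aun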
Run the aws iam create-role command to create the role that Vector can assume.
Because the role is specific to a cluster, prefix the role name with your cluster name.
This way, you can distinguish the IAM resources that belong to each of your ROSA clusters.
The following command creates the mycluster-RosaCloudWatch role, and then displays the ARN of the new object.
$ aws iam create-role --role-name mycluster-RosaCloudWatch \
    --assume-role-policy-document file://trust-policy.json \
    --query Role.Arn --output text
arn:aws:iam::452954386616:role/mycluster-RosaCloudWatch
Attach the IAM policy to the IAM role:
$ aws iam attach-role-policy --role-name mycluster-RosaCloudWatch \
    --policy-arn "arn:aws:iam::452954386616:policy/RosaCloudWatch"
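Optionally, you can confirm that the policy is attached to the role by listing the attached policies:
$ aws iam list-attached-role-policies --role-name mycluster-RosaCloudWatch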
Store the ARN of the IAM role in an OpenShift secret.
The following cloudwatch-creds.yaml file declares the cloudwatch-credentials secret that contains the ARN of the IAM role:
apiVersion: v1
kind: Secret
metadata:
  name: cloudwatch-credentials
  namespace: openshift-logging
stringData:
  role_arn: "arn:aws:iam::452954386616:role/mycluster-RosaCloudWatch"
Use the oc create command to create the resource:
$ oc create -f cloudwatch-creds.yaml
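You can verify that the secret exists in the openshift-logging namespace before you continue:
$ oc get secret cloudwatch-credentials -n openshift-logging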
You install the logging subsystem in your cluster by deploying the Red Hat OpenShift Logging operator.
After installation, you configure Vector by using the ClusterLogForwarder custom resource (CR), and configure the logging subsystem by using the ClusterLogging CR.
The ROSA creation process creates the openshift-logging namespace and the operator group.
To deploy Red Hat OpenShift Logging from the command line, you need to create only the Subscription resource for the operator.
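Optionally, you can confirm that the namespace and the operator group already exist before you create the Subscription resource. This is a minimal check:
$ oc get operatorgroups -n openshift-logging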
The following logging-operator.yaml file declares the Subscription resource:
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Use the oc apply command to deploy the operator.
$ oc apply -f logging-operator.yaml
Monitor the installation process by inspecting the related ClusterServiceVersion object.
The installation completes when the PHASE column displays the Succeeded status.
$ oc get csv -n openshift-logging
NAME                     DISPLAY                     VERSION   ...   PHASE
cluster-logging.v5.6.5   Red Hat OpenShift Logging   5.6.5     ...   Succeeded
...output omitted...
You configure Vector by creating a ClusterLogForwarder resource.
The following vector-conf.yaml file declares the instance object:
apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instancenamespace: openshift-logging spec: outputs: - name: cw
type: cloudwatch cloudwatch:
groupBy: namespaceName
groupPrefix: rosa-
myclusterregion: us-east-1
secret: name: cloudwatch-credentials
pipelines: - name: to-cloudwatch inputRefs:
- infrastructure - audit - application outputRefs: - cw
The object name must be instance, and the resource must be in the openshift-logging namespace.
The outputs section declares the cw output, which forwards the logs to CloudWatch.
The type attribute must be cloudwatch for this output.
When you set the groupBy attribute to logType, CloudWatch creates one log group for each log type. When you set the attribute to namespaceName, CloudWatch creates one log group for each project that produces application logs, and still groups the infrastructure and audit logs by log type. When you set the attribute to namespaceUUID, CloudWatch uses the project UUID instead of the project name.
CloudWatch prefixes each log group with the name that you define in the groupPrefix attribute.
The AWS Region of your cluster.
The name of the secret resource that contains the ARN of the IAM role.
List of log types to forward. Because the audit logs include sensitive information, you might exclude this log type from the inputRefs list, as shown in the example that follows this description.
The output name to use, under the outputs section, for sending the logs.
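For example, the following pipelines section is a variant that forwards only the infrastructure and application logs, and drops the audit logs. This snippet is only a sketch of the change; the rest of the ClusterLogForwarder resource stays the same:
  pipelines:
  - name: to-cloudwatch
    inputRefs:
    - infrastructure
    - application
    outputRefs:
    - cw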
Use the oc apply command to create the resource:
$ oc apply -f vector-conf.yaml
Configure the logging subsystem by creating a ClusterLogging CR.
The name of the resource must be instance.
The following logging-conf.yaml file declares this resource:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: vector
  managementState: Managed
Use the oc apply command to create the resource:
$ oc apply -f logging-conf.yaml
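After you create the ClusterLogging resource, the operator deploys the Vector collector pods on the cluster nodes. You can check that the pods reach the Running state; the collector pods typically have names that start with collector-, and the exact names vary:
$ oc get pods -n openshift-logging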
You can access the log messages by using the AWS CLI or the AWS Management Console.
From the command line, use the aws logs command.
CloudWatch organizes the logs into groups:
$ aws logs describe-log-groups
{
    "logGroups": [
        {
            "logGroupName": "rosa-mycluster.audit",
            "creationTime": 1682067556160,
            "metricFilterCount": 0,
            "arn": "arn:aws:logs:...:log-group:rosa-mycluster.audit:*",
            "storedBytes": 0
        },
        {
            "logGroupName": "rosa-mycluster.application",
            "creationTime": 1682067555259,
            "metricFilterCount": 0,
            "arn": "arn:aws:logs:...:log-group:rosa-mycluster.application:*",
            "storedBytes": 0
        },
        {
            "logGroupName": "rosa-mycluster.infrastructure",
            "creationTime": 1682067555259,
            "metricFilterCount": 0,
            "arn": "arn:aws:logs:...:log-group:rosa-mycluster.infrastructure:*",
            "storedBytes": 0
        }
    ]
}
In each group, CloudWatch organizes the logs into streams. A stream groups the messages for a pod or for a node, for example.
The following command lists the streams in the rosa-mycluster.infrastructure group.
The logStreamName attribute includes either the pod name and its namespace, or the node name and the log type.
$ aws logs describe-log-streams --log-group-name rosa-mycluster.infrastructure
{
    "logStreams": [
        {
            "logStreamName": "ip-10-0-148-7.ec2.internal.kubernetes.var.log.pods.openshift-apiserver_apiserver-785568cf5-lsbzj_b90...1ded.openshift-apiserver.0.log",
            "creationTime": 1682351719884,
            "firstEventTimestamp": 1682351859573,
            "lastEventTimestamp": 1682407432799,
            ...output omitted...
        },
        ...output omitted...
        {
            "logStreamName": "ip-10-0-237-202.journal.system",
            "creationTime": 1682351707381,
            "firstEventTimestamp": 1682351706888,
            "lastEventTimestamp": 1682410840958,
            ...output omitted...
        },
        ...output omitted...
To retrieve all the messages in a stream, use the aws logs get-log-events command:
$ aws logs get-log-events --log-group-name rosa-mycluster.infrastructure \
    --log-stream-name ip-10-0-237-202.journal.system
{
    "events": [
        {
            "timestamp": 1682410810457,
            "message": "{...}",
            "ingestionTime": 1682410811460
        },
        {
            "timestamp": 1682410810957,
            "message": "{...}",
            "ingestionTime": 1682410811460
        },
        ...output omitted...
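To search for a pattern across all the streams of a group, instead of reading one stream at a time, you can use the aws logs filter-log-events command. The filter pattern in the following sketch is only an example:
$ aws logs filter-log-events --log-group-name rosa-mycluster.infrastructure \
    --filter-pattern "error"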
To review the logs of your ROSA cluster, use the AWS Management Console at https://console.aws.amazon.com/.
When you use the AWS Management Console, be sure to select the correct region for your ROSA cluster. Otherwise, you might not find the AWS resources that you are looking for.

You access the log messages for your ROSA cluster as follows:
Navigate to Services → Management & Governance → CloudWatch.

Select Logs → Log groups. The page displays the log groups.

Select a group, and then a stream. You can expand the messages to access their details.

For more information about ROSA and CloudWatch, refer to the Forwarding Logs to Amazon CloudWatch section in the Forwarding Logs to External Third-party Logging Systems chapter in the Red Hat OpenShift Service on AWS 4 Logging documentation at https://access.redhat.com/documentation/en-us/red_hat_openshift_service_on_aws/4/html-single/logging/index#cluster-logging-collector-log-forward-cloudwatch_cluster-logging-external
For more information about installing the Red Hat OpenShift Logging operator, refer to the Installing the Logging Subsystem for Red Hat OpenShift chapter in the Red Hat OpenShift Container Platform 4.12 Logging documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/logging/index#cluster-logging-deploying
For more information about the Cloud Credential Operator, refer to the Managing Cloud Provider Credentials chapter in the Red Hat OpenShift Container Platform 4.12 Authentication and Authorization documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/authentication_and_authorization/index#managing-cloud-provider-credentials