By default, Kubernetes stores pod logs on the local disk of each node, and that information is lost when the pod is deleted.
The OpenShift Logging operator collects and aggregates the log messages in your cluster so that logs remain accessible after the pods that produced them are gone.
OpenShift Logging is based on the following components, which are selected in the configuration example after this list:
A collector, such as Vector, that gathers logs from all running containers and from the cluster nodes.
A log store, such as Grafana Loki, that aggregates logs from the entire cluster in a central location and provides access control to the logs.
A visualization console, such as the OpenShift Logging UI, to view and query logs in the internal log store.
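These components are declared in the ClusterLogging resource that the operator reconciles. The following is a minimal sketch, assuming the logging.openshift.io/v1 ClusterLogging API; the supported field values (for example, the ocp-console visualization type) vary between Logging releases, so treat the values as illustrative rather than definitive.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                # the operator expects this exact name
  namespace: openshift-logging
spec:
  managementState: Managed
  collection:
    type: vector                # log collector
  logStore:
    type: lokistack             # central log store backed by Loki
    lokistack:
      name: logging-loki        # name of an existing LokiStack resource (assumed)
  visualization:
    type: ocp-console           # logging UI in the OpenShift web console
```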
You can configure OpenShift Logging to forward logs to an external, third-party logging system, or to use the internal log store for short-term retention.
By default, OpenShift Logging collects infrastructure and application logs. To store audit logs in the internal log store as well, you must configure the log forwarder to include them.
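As a sketch of that forwarder configuration, the following ClusterLogForwarder pipeline sends audit logs, together with application and infrastructure logs, to the internal log store (the default output). This assumes the logging.openshift.io/v1 ClusterLogForwarder API; newer Logging releases use a different API group and require explicitly defined outputs, so check the API version that matches your installed operator.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                # the operator expects this exact name
  namespace: openshift-logging
spec:
  pipelines:
    - name: all-logs-to-store   # arbitrary pipeline name
      inputRefs:                # log types to collect
        - application
        - infrastructure
        - audit                 # audit logs are not stored unless listed here
      outputRefs:
        - default               # the internal log store
```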
You can deploy the optional Event Router component in OpenShift Logging to capture Kubernetes events: it watches the events and writes them to its container log so that the collector can pick them up like any other log.
Loki uses LogQL as the query language for filtering and searching logs.
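For example, the following LogQL query selects the log streams of one namespace and keeps only the lines that contain the string "error". The kubernetes_namespace_name label is an assumption for illustration; the labels that are actually available depend on how the collector labels the streams it sends to Loki.

```
{kubernetes_namespace_name="my-app"} |= "error"
```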