Install an operator by using the command-line interface and Kubernetes manifests.
Outcomes
Install operators from the CLI with manual updates.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that the cluster is ready, and removes the openshift-file-integrity namespace and File Integrity operator if they exist.
[student@workstation ~]$ lab start operators-cli
Instructions
In this exercise, you install the File Integrity operator with manual updates. The documentation of the File Integrity operator contains specific installation instructions.
For more information, refer to the Installing the File Integrity Operator Using the CLI section in the File Integrity Operator chapter in the Red Hat OpenShift Container Platform 4.14 Security and Compliance documentation at https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/security_and_compliance/index#installing-file-integrity-operator-using-cli_file-integrity-operator-installation
Log in to the OpenShift cluster as the admin user with the redhatocp password.
[student@workstation ~]$ oc login -u admin -p redhatocp \
https://api.ocp4.example.com:6443
Login successful.
...output omitted...
Find the details of the File Integrity operator within the OpenShift package manifests.
View the available operators within the OpenShift Marketplace by using the oc get command.
[student@workstation ~]$ oc get packagemanifests
NAME                      CATALOG                     AGE
file-integrity-operator   do280 Operator Catalog Cs   37h
lvms-operator             do280 Operator Catalog Cs   37h
compliance-operator       do280 Operator Catalog Cs   37h
metallb-operator          do280 Operator Catalog Cs   37h
kubevirt-hyperconverged   do280 Operator Catalog Cs   37h
Examine the File Integrity operator package manifest by using the oc describe command.
[student@workstation ~]$ oc describe packagemanifest file-integrity-operator
Name:         file-integrity-operator
...output omitted...
Spec:
Status:
  Catalog Source:                do280-catalog-cs
  Catalog Source Display Name:   do280 Operator Catalog Cs
  Catalog Source Namespace:      openshift-marketplace
  Catalog Source Publisher:
  Channels:
    ...output omitted...
    Install Modes:
      Supported:  true
      Type:       OwnNamespace
      Supported:  true
      Type:       SingleNamespace
      Supported:  false
      Type:       MultiNamespace
      Supported:  true
      Type:       AllNamespaces
    ...output omitted...
    Name:            stable
  Default Channel:   stable
  Package Name:      file-integrity-operator
...output omitted...
The operator is in the do280-catalog-cs catalog source in the openshift-marketplace namespace.
The operator has a single channel with the stable name.
The operator has the file-integrity-operator name.
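The catalog source, default channel, and package name can also be read directly from the package manifest with a jsonpath query. The following command is a sketch that assumes the status field names shown in the previous output:
[student@workstation ~]$ oc get packagemanifest file-integrity-operator \
  -o jsonpath='{.status.catalogSource}{" "}{.status.defaultChannel}{"\n"}'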
Install the File Integrity operator.
According to the operator installation instructions, you must install the operator in the openshift-file-integrity namespace and make it available only in that namespace.
The File Integrity operator requires you to create a namespace with specific labels.
The operator documentation provides a YAML definition of the required namespace.
The definition is available in the ~/DO280/labs/operators-cli/namespace.yaml path.
Examine the definition and create the namespace.
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
  name: openshift-file-integrity
[student@workstation ~]$ oc create -f ~/DO280/labs/operators-cli/namespace.yaml
namespace/openshift-file-integrity created
Create an operator group in the operator namespace.
The operator group targets the same namespace.
You can use the template in the ~/DO280/labs/operators-cli/operator-group.yaml path.
Edit the file and configure the namespaces.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: file-integrity-operator
  namespace: openshift-file-integrity
spec:
  targetNamespaces:
  - openshift-file-integrity
Create the operator group.
[student@workstation ~]$ oc create \
-f ~/DO280/labs/operators-cli/operator-group.yaml
operatorgroup.operators.coreos.com/file-integrity-operator created
Create the subscription in the operator namespace.
You can use the template in the ~/DO280/labs/operators-cli/subscription.yaml path.
Edit the file with the data that you obtained in a previous step.
Set the approval policy to Manual.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: file-integrity-operator
  namespace: openshift-file-integrity
spec:
  channel: "stable"
  installPlanApproval: Manual
  name: file-integrity-operator
  source: do280-catalog-cs
  sourceNamespace: openshift-marketplace
Create the subscription.
[student@workstation ~]$ oc create -f ~/DO280/labs/operators-cli/subscription.yaml
subscription.operators.coreos.com/file-integrity-operator created
Approve the install plan.
Examine the operator resource that the OLM created.
[student@workstation ~]$ oc describe operator file-integrity-operator
Name:         file-integrity-operator.openshift-file-integrity
...output omitted...
Status:
  Components:
    Label Selector:
      Match Expressions:
        Key:       operators.coreos.com/file-integrity-operator.openshift-file-integrity
        Operator:  Exists
    Refs:
      ...output omitted...
      Kind:         InstallPlan
      Name:         install-4wsq6
      Namespace:    openshift-file-integrity
      API Version:  operators.coreos.com/v1alpha1
      Conditions:
        Last Transition Time:  2024-01-26T10:38:22Z
        Message:               all available catalogsources are healthy
        Reason:                AllCatalogSourcesHealthy
        Status:                False
        Type:                  CatalogSourcesUnhealthy
        Last Transition Time:  2024-01-26T10:38:21Z
        Reason:                RequiresApproval
        Status:                True
        Type:                  InstallPlanPending
      Kind:       Subscription
      Name:       file-integrity-operator
      Namespace:  openshift-file-integrity
Events:  <none>
Verify that the operator has a condition of the InstallPlanPending type.
The operator can have other conditions, and they do not indicate a problem.
The operator references the install plan.
You use the install plan name in a later step.
If the install plan is not generated, then wait a few moments and run the oc describe command again.
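The install plan name is also recorded in the subscription status. A query like the following sketch, which assumes that the installPlanRef field is already populated, retrieves only the name:
[student@workstation ~]$ oc get subscription file-integrity-operator \
  -n openshift-file-integrity -o jsonpath='{.status.installPlanRef.name}{"\n"}'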
View the install plan specification with the oc get command.
Replace the name with the install plan name that you obtained in a previous step.
[student@workstation ~]$ oc get installplan -n openshift-file-integrity \
install-4wsq6 -o jsonpath='{.spec}{"\n"}'
{"approval":"Manual","approved":false,"clusterServiceVersionNames":["file-integrity-operator.v1.3.3","file-integrity-operator.v1.3.3"],"generation":1}The install plan is set to manual approval, and the approved field is set to false.
Approve the install plan with the oc patch command.
Replace the name with the install plan name that you obtained in a previous step.
[student@workstation ~]$ oc patch installplan install-4wsq6 --type merge -p \
'{"spec":{"approved":true}}' -n openshift-file-integrity
installplan.operators.coreos.com/install-4wsq6 patched
Verify that the operator installs successfully by using the oc describe command.
Check the latest transaction for the current status.
The installation might not complete immediately.
If the installation is not complete, then wait a few minutes and view the status again.
[student@workstation ~]$ oc describe operator file-integrity-operator
...output omitted...
Status:
  Components:
    Label Selector:
      Match Expressions:
        Key:       operators.coreos.com/file-integrity-operator.openshift-file-integrity
        Operator:  Exists
    Refs:
      ...output omitted...
      Conditions:
        Last Transition Time:  2024-01-26T18:21:03Z
        Last Update Time:      2024-01-26T18:21:03Z
        Message:               install strategy completed with no errors
        Reason:                InstallSucceeded
        Status:                True
        Type:                  Succeeded
      Kind:       ClusterServiceVersion
      Name:       file-integrity-operator.v1.0.0
      Namespace:  openshift-file-integrity
...output omitted...
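As an additional check, listing the cluster service versions in the namespace shows the installation phase, which reads Succeeded when the installation finishes. This command is a sketch:
[student@workstation ~]$ oc get csv -n openshift-file-integrity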
Examine the workloads in the openshift-file-integrity namespace.
[student@workstation ~]$ oc get all -n openshift-file-integrity
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
NAME                                           READY   STATUS    RESTARTS      AGE
pod/file-integrity-operator-6985588576-x2k49   1/1     Running   1 (50s ago)   56s
...output omitted...
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/file-integrity-operator   1/1     1            1           56s
NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/file-integrity-operator-6985588576   1         1         1       56s
The namespace has a ready deployment.
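If the deployment does not become ready, then the operator logs might help to diagnose the problem. The following command is a sketch:
[student@workstation ~]$ oc logs deployment/file-integrity-operator \
  -n openshift-file-integrity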
Test the operator to ensure that it is functional.
The operator watches FileIntegrity resources, runs file integrity checks on nodes, and creates FileIntegrityNodeStatus objects with the results of the checks.
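The lab provides a ready-made manifest at ~/DO280/labs/operators-cli/worker-fileintegrity.yaml. A minimal FileIntegrity definition of this kind, assuming the specification values that the oc describe output shows later in this step, might resemble the following sketch:
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: worker-fileintegrity
  namespace: openshift-file-integrity
spec:
  # Assumed values; the provided lab file might differ.
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  config:
    gracePeriod: 900
    maxBackups: 5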
Create a FileIntegrity custom resource by applying the file at ~/DO280/labs/operators-cli/worker-fileintegrity.yaml with the oc apply command.
[student@workstation ~]$ oc apply -f \
~/DO280/labs/operators-cli/worker-fileintegrity.yaml
fileintegrity.fileintegrity.openshift.io/worker-fileintegrity created
Verify that the operator functions by viewing the worker-fileintegrity object with the oc describe command.
[student@workstation ~]$ oc describe fileintegrity worker-fileintegrity \
-n openshift-file-integrity
Name: worker-fileintegrity
Namespace: openshift-file-integrity
Labels: <none>
Annotations: <none>
API Version: fileintegrity.openshift.io/v1alpha1
Kind: FileIntegrity
...output omitted...
Spec:
  Config:
    Grace Period:  900
    Max Backups:   5
  Node Selector:
    node-role.kubernetes.io/worker:
  Tolerations:
    Effect:    NoSchedule
    Key:       node-role.kubernetes.io/master
    Operator:  Exists
    Effect:    NoSchedule
    Key:       node-role.kubernetes.io/infra
    Operator:  Exists
Events:  <none>
Use oc edit to change the Grace Period to 60 in the FileIntegrity custom resource, so that the failure that you introduce in a later step is detected quickly.
[student@workstation ~]$ oc edit fileintegrity worker-fileintegrity \
  -n openshift-file-integrity
Name:         worker-fileintegrity
Namespace:    openshift-file-integrity
Labels:       <none>
Annotations:  <none>
API Version:  fileintegrity.openshift.io/v1alpha1
Kind:         FileIntegrity
...output omitted...
Spec:
  Config:
    Grace Period:  60
    Max Backups:   5
  Node Selector:
    node-role.kubernetes.io/worker:
  Tolerations:
    Effect:    NoSchedule
    Key:       node-role.kubernetes.io/master
    Operator:  Exists
    Effect:    NoSchedule
    Key:       node-role.kubernetes.io/infra
    Operator:  Exists
Events:  <none>
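As a non-interactive alternative to oc edit, a merge patch like the following sketch applies the same change; the field path assumes the spec layout that the previous output shows:
[student@workstation ~]$ oc patch fileintegrity worker-fileintegrity \
  -n openshift-file-integrity --type merge \
  -p '{"spec":{"config":{"gracePeriod":60}}}'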
Verify that the operator automatically creates a FileIntegrityNodeStatus object, by using the oc get command.
You might need to wait a few minutes for the object to generate.
The first file integrity resource that you create might not work correctly.
If the operator does not create the FileIntegrityNodeStatus resource in a few minutes, then delete the FileIntegrity resource and create it again.
The exercise outcome does not depend on obtaining a FileIntegrityNodeStatus resource.
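If you need to recreate the resource, then commands similar to the following sketch delete and reapply it:
[student@workstation ~]$ oc delete fileintegrity worker-fileintegrity \
  -n openshift-file-integrity
[student@workstation ~]$ oc apply -f \
  ~/DO280/labs/operators-cli/worker-fileintegrity.yaml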
[student@workstation ~]$ oc get fileintegritynodestatuses \
-n openshift-file-integrity
NAME                            NODE       STATUS
worker-fileintegrity-master01   master01   Succeeded
After the FileIntegrityNodeStatus object is created, modify the node's file system as the admin user with the oc debug command.
[student@workstation ~]$ oc debug node/master01 -- touch /host/etc/foobar
Starting pod/master01-debug-l92pd ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Run oc get configmaps -n openshift-file-integrity to list the configmaps in the openshift-file-integrity namespace.
[student@workstation ~]$ oc get configmaps -n openshift-file-integrity --watch
NAME                                        DATA   AGE
aide-pause                                  1      109m
aide-reinit                                 1      109m
aide-worker-fileintegrity-master01-failed   1      108m
kube-root-ca.crt                            1      117m
openshift-service-ca.crt                    1      117m
worker-fileintegrity                        1      109m
The aide-worker-fileintegrity-master01-failed configmap might take several minutes to appear. Use the --watch flag and wait until this failed configmap appears before you move on to the next step. Press Ctrl+C to exit.
Run oc describe to view the report in the aide-worker-fileintegrity-master01-failed configmap in the openshift-file-integrity namespace.
[student@workstation ~]$ oc describe \
configmap/aide-worker-fileintegrity-master01-failed \
-n openshift-file-integrity
Name: aide-worker-fileintegrity-master01-failed
Namespace: openshift-file-integrity
Labels: file-integrity.openshift.io/node=master01
file-integrity.openshift.io/owner=worker-fileintegrity
file-integrity.openshift.io/result-log=
Annotations: file-integrity.openshift.io/files-added: 1
file-integrity.openshift.io/files-changed: 0
file-integrity.openshift.io/files-removed: 0
Data
====
integritylog:
----
Start timestamp: 2024-01-26 18:31:16 +0000 (AIDE 0.16)
AIDE found differences between database and filesystem!!
Summary:
Total number of entries: 32359
Added entries: 1
Removed entries: 0
Changed entries: 0
---------------------------------------------------
Added entries:
---------------------------------------------------
f++++++++++++++++: /hostroot/etc/cni/multus/certs/multus-client-2024-01-26-15-14-01.pem
f++++++++++++++++: /hostroot/etc/foobar
---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------
/hostroot/etc/kubernetes/aide.db.gz
MD5 : UswXQiVa/VpjlXF1rCP0vA==
SHA1 : s6t06MCRrDgc4xOWnX6vk5rflGU=
RMD160 : jvDdvAOC7/tI0TjDe7Kzmy5nUk8=
TIGER : TjW192YTQBmG4oGza7siI6CBRnztgrp6
SHA256 : E8rWurdI9HgGP6402qWY+lDAaLoGiyNs
PEka/siI1F0=
SHA512 : JPDhgoEnNiTaDLqawkGtHplRW8f6zm3g
jDB3E6X6XM4+13yhjwh/pokFAp5BhRSc
0C4XXibXsS4OYxYiE5hBaw==
End timestamp: 2024-01-26 18:31:45 +0000 (run time: 0m 29s)
BinaryData
====
Events: <none>
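To extract only the AIDE report from the configmap, a jsonpath query like the following sketch can be used; it assumes the integritylog data key that the previous output shows:
[student@workstation ~]$ oc get configmap aide-worker-fileintegrity-master01-failed \
  -n openshift-file-integrity -o jsonpath='{.data.integritylog}'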