After completing this section, you should be able to describe manual failover and acceptance tests, and check the status of the HA cluster.
When the RHEL HA cluster manages the SAP resources, it is good practice to use only cluster commands to stop and start them.
As an administrator, you can stop the SAP resources and resource groups that the RHEL HA cluster manages at any time, to ensure that they are not running on the cluster.
To stop a running resource and to prevent the cluster from starting it again, run the pcs resource disable <resourcename> command.
[root@node ~]# pcs resource disable <sapresource>

To enable the cluster to start the resource or resource group again, use the pcs resource enable <resourcename> command.
[root@node ~]# pcs resource enable <sapresource>

After running either command, it is highly recommended to wait until the resources are completely started or stopped, because the SAP resources might take time to start or stop.
Use the pcs status command to verify the status of the cluster resources.
[root@node ~]# pcs status --full

You can also monitor the status of the cluster in real time by using the crm_mon command.
This command occupies the terminal; press Ctrl+C to return to the prompt.
Run the following command to start the real-time monitoring:
[root@node ~]# crm_mon -A

To list all the resources, use the following command:
[root@node ~]# pcs resource

Cluster resources and resource groups can be moved away from the cluster node where they are currently running.
Optionally, specify a target node to indicate on which node to run the cluster resource or resource group that you are moving.
If a cluster node has a maintenance window, for example to apply errata, then use this feature:
Run the pcs resource move <sap-resource> command to move the <sap-resource> group away from the node where it currently runs.
[root@node ~]# pcs resource move <sap-resource>

The pcs resource move command adds a constraint to the resource to prevent it from running on the node where it currently runs.
How to list and delete constraints is described later in this chapter.
The move command without a target creates a Disable rule for the original node.
The move command with a target creates an Enable rule for the target node.
For an SAP HANA resource, use the following command for the move:
[root@node ~]# crm_resource --move --resource <sap-hana-resource-clone>

Note: Wait for the secondary SAP HANA resource to be fully promoted to primary before performing any other task.
The crm_resource command is used only in RHEL 8 and earlier versions when running SAP HANA in a Pacemaker cluster.
Starting from RHEL 9, only pcs resource move is available.
After the move completes, remove the temporary constraint with the pcs resource clear command:

[root@node ~]# pcs resource clear <sap-hana-resource-clone>
Removing constraint: .....

Note: Wait for the former primary to fully start and to register as secondary.
As a cluster administrator, you can temporarily prevent the migration of a resource to a specific cluster node.
The pcs resource ban <resourcename> command prohibits the resource from running on the node where it currently runs.
Optionally, you can add a particular node as a parameter on the command line to restrict the cluster resource from migrating to the specified node.
To prevent the <sap-resource> group from running on <node-name>, execute the following command:
[root@node ~]# pcs resource ban <sap-resource> <node-name>

Both pcs resource move and pcs resource ban create a temporary constraint rule on the cluster.
Constraints are used, among other reasons, to influence which resources can run where.
For a cluster administrator, it is important to understand the behavior of the configured services on the cluster.
The pcs constraint list command gives an administrator an overview of the currently configured constraints in the cluster.
In the following example, the sap_resource_group resource group is banned from running on the other node.
[root@node ~]# pcs constraint list --full
Location Constraints:
Resource: sap_resource_group
....

As a cluster administrator, you can remove the temporary restrictions for a resource with the pcs resource clear <resourcename/resource-group-name> command.
To clear the ban restriction for the <sap-resource> resource group, execute the following command:
[root@node ~]# pcs resource clear <sap-resource>

When handling SAP resources, it is highly recommended not to move them again immediately after a previous move. SAP resources involve replication; after the initial move, it takes time for the former primary node to register itself as secondary based on the resource configuration. For this to happen successfully, the former primary node must rejoin the cluster, synchronize the data again according to the configured replication settings, and then register as the new secondary.
By default, Pacemaker assumes that resource relocation has no cost. In other words, if a node with a higher score becomes available, then Pacemaker relocates the resource to that node. This behavior can cause extra unplanned downtime for the resource, especially if it is expensive to relocate (for example, if the resource takes significant time to stop and start).
A default resource stickiness adds a score to the node on which a resource is currently running. For example, assume that the resource stickiness is set to 1000, and that the resource's preferred node has a location constraint with a score of 500. On start, the resource runs on the preferred node. If the preferred node crashes, then the resource moves to one of the other nodes, and that node gets a score of 1000. When the preferred node comes back, it has a score of only 500, so the resource does not automatically relocate back to the preferred node. The cluster administrator can then manually relocate the resource to the preferred node at a convenient time, perhaps during a planned outage window.

To set a default resource stickiness of 1000 for all resources, run this command:
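The failback scenario above can be illustrated with a small sketch. This is a hypothetical model of the score comparison, not the actual Pacemaker scheduler code; the node names and numbers mirror the example in the text.

```python
# Hypothetical model: a resource runs on the candidate node with the
# highest score, where the current node's score is its location-constraint
# score plus the resource stickiness. (Illustration only, not pacemaker code.)

def placement(location_scores, current_node, stickiness):
    """Return the node the resource would run on."""
    scores = dict(location_scores)
    if current_node is not None:
        scores[current_node] = scores.get(current_node, 0) + stickiness
    return max(scores, key=scores.get)

# Preferred node node1 has a location constraint with a score of 500.
constraints = {"node1": 500, "node2": 0}

# On first start (no current node), the resource picks the preferred node.
print(placement(constraints, None, 1000))    # node1

# After node1 crashed, the resource moved to node2. When node1 returns,
# node1 scores 500 but node2 scores 0 + 1000 = 1000, so no failback occurs.
print(placement(constraints, "node2", 1000)) # node2
```

With stickiness set to 0, the second call would return node1 instead, which is exactly the automatic failback (and extra downtime) that stickiness is meant to prevent.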
[root@node ~]# pcs resource defaults update resource-stickiness=1000

To view the current resource defaults, run this command:
[root@node ~]# pcs resource defaults

To clear the resource stickiness setting, run this command:
[root@node ~]# pcs resource defaults update resource-stickiness=

Resource stickiness for a resource group is calculated based on how many resources are running in that group. If a resource group has five active resources and resource stickiness is 1000, then the resource group has an effective score of 5000.
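The group calculation is a simple multiplication; the following one-line sketch (a hypothetical helper, not a Pacemaker API) makes the arithmetic from the example explicit.

```python
# Sketch: a resource group's effective stickiness scales with the number
# of active member resources, each contributing the stickiness value.

def group_stickiness(active_resources, resource_stickiness):
    return active_resources * resource_stickiness

# Five active resources with a resource stickiness of 1000:
print(group_stickiness(5, 1000))  # 5000
```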
This concludes the chapter on verifying the environment configuration.