After completing this section, you should be able to describe, plan, and set up the SAP HANA Scale-up Resource Agent.
The SAP HANA scale-up resource agent handles and automates SAP HANA system replication. The agent consists of the following resources:
SAPHana
SAPHanaTopology
The resource agent is part of the resource-agents-sap-hana package.
The SAPHanaTopology resource runs on every node.
It gathers information about the running SAP HANA instances.
The SAPHana resource handles the replicated SAP HANA database instances.
If the primary site fails, then the resource automates failover. The former secondary database then becomes the new primary database. You can choose whether the failed site is automatically registered as a secondary site.
To do so, you must set the AUTOMATED_REGISTER parameter.
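As a minimal sketch, AUTOMATED_REGISTER can later be changed on an existing resource with pcs resource update. The helper below only prints the command instead of running it against a live cluster, and the resource name SAPHana_RH1_00 is taken from the examples later in this section:

```shell
# Print (dry run) the pcs command that would change AUTOMATED_REGISTER
# on a SAPHana resource. The resource name is an example only.
set_automated_register() {
    resource="$1"
    value="$2"
    echo "pcs resource update ${resource} AUTOMATED_REGISTER=${value}"
}

set_automated_register SAPHana_RH1_00 true
```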
For more information about setting up the resource, display online help with the following commands:
[root]# pcs resource describe SAPHanaTopology
[root]# pcs resource describe SAPHana
For more details, see Automating SAP HANA Scale-Up System Replication Using the RHEL HA Add-On, https://access.redhat.com/articles/3004101
To configure the SAP HANA scale-up resource agent, note the following prerequisites:
Base installation of a two-node Pacemaker cluster.
Installed SAP HANA database on two nodes with the same SID and InstanceNumber values.
Configured SAP HANA system replication between primary and secondary nodes.
Installed resource-agents-sap-hana package, in the same version, on all cluster nodes.
Before you start, run the following steps to verify the configuration:
[root]# yum info resource-agents-sap-hana
[root]# pcs cluster status --full
[sidadm]% hdbnsutil -sr_state
To verify whether the resource agent is installed, use the following command:
[root]# yum info resource-agents-sap-hana --installed
Updating Subscription Management repositories.
Installed Packages
Name : resource-agents-sap-hana
Epoch : 1
Version : 0.154.0
Release : 2.el8_4.1
Architecture : noarch
Size : 153 k
Source : resource-agents-sap-hana-0.154.0-2.el8_4.1.src.rpm
Repository : @System
From repo : rhel-8-for-x86_64-sap-solutions-rpms
Summary : SAP HANA cluster resource agents
URL : https://github.com/SUSE/SAPHanaSR
License : GPLv2+
Description : The SAP HANA resource agents interface with Pacemaker to allow
            : SAP instances to be managed in a cluster environment.

Verify the status of the cluster:
[root]# pcs status
...
Node List:
* Online: [ hana01 hana02 ]
...
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

Verify the status of SAP HANA system replication:
[sidadm]% hdbnsutil -sr_state

Whether you configure SAP HANA system replication with Ansible or manually, you must specify these parameters:
| Name | Description | Example |
|---|---|---|
| node_name | Name of the node | hana01, hana02 |
| hana_site | Location name of the HANA instance | DC1, DC2 |
| node_role | Role of the node | primary or secondary |
| replicationMode | Replication Mode for HANA System Replication, https://bit.ly/3QIvfxi | sync or syncmem |
| operationMode | Operation Mode for HANA System Replication, https://bit.ly/3QvWrja | delta_datashipping or logreplay or logreplay_readaccess |
| sap_hana_vip1 | Virtual IP address for the SAP HANA primary node | 192.168.1.111 |
| sap_hana_vip2 | Virtual IP address for the SAP HANA secondary read-enabled node | 192.168.1.112 |
| sap_hana_sid | SID system identifier | RH1 |
| sap_hana_instance_number | Instance number | 00 |
| PREFER_SITE_TAKEOVER | Whether the resource agent prefers a takeover to the secondary instance instead of restarting the primary locally | true or false or never |
| DUPLICATE_PRIMARY_TIMEOUT | Time difference needed between two primary time stamps, if a dual-primary situation occurs, before automatic failover handling | Default 7200 seconds |
| AUTOMATED_REGISTER | Whether the former primary should automatically be registered as secondary | Default false |
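As a quick sanity check before starting the setup, the mode values from the table can be validated in a small shell sketch. The variable names are illustrative only; no tool requires them:

```shell
# Validate replication and operation mode choices against the values
# accepted by SAP HANA system replication (see the parameter table).
replicationMode="syncmem"
operationMode="logreplay"

case "$replicationMode" in
    sync|syncmem) ;;
    *) echo "invalid replicationMode: $replicationMode" >&2; exit 1 ;;
esac

case "$operationMode" in
    delta_datashipping|logreplay|logreplay_readaccess) ;;
    *) echo "invalid operationMode: $operationMode" >&2; exit 1 ;;
esac

echo "parameters OK"
```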
This section includes various command examples to set up the environment.
Without using Ansible, you can follow these installation steps, which are also described in Configuring SAP HANA in a Pacemaker Cluster, https://access.redhat.com/articles/3004101#configuring-sap-hana-in-pacemaker-cluster
The following steps are needed:
Activate the srConnectionChanged() hook.
Configure general cluster properties.
Create the SAPHanaTopology resource.
Create the SAPHana resource.
Create a virtual IP address resource.
Create constraints.
Optional: Add a secondary virtual IP address resource.
Test the setup.
Detailed installation steps follow:
Stop the cluster and copy the srConnectionChanged() hook file as shown:
[root]# pcs cluster stop --all
[root]# mkdir -p /hana/shared/myHooks
[root]# cp /usr/share/SAPHanaSR/srHook/SAPHanaSR.py /hana/shared/myHooks
[root]# chown -R rh1adm:sapsys /hana/shared/myHooks
Note:
rh1adm is the example <sid>adm user here.
Update the global.ini file:
[root]# vim /hana/shared/RH1/global/hdb/custom/config/global.ini
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[trace]
ha_dr_saphanasr = info

Create the sudoers configuration:
[root]# visudo -f /etc/sudoers.d/20-saphana
Cmnd_Alias DC1_SOK = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC1 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias DC1_SFAIL = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC1 -v SFAIL -t crm_config -s SAPHanaSR
Cmnd_Alias DC2_SOK = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias DC2_SFAIL = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SFAIL -t crm_config -s SAPHanaSR
rh1adm ALL=(ALL) NOPASSWD: DC1_SOK, DC1_SFAIL, DC2_SOK, DC2_SFAIL
Defaults!DC1_SOK, DC1_SFAIL, DC2_SOK, DC2_SFAIL !requiretty

Configure general cluster properties:
[root]# pcs resource defaults update resource-stickiness=1000
[root]# pcs resource defaults update migration-threshold=5000
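The four Cmnd_Alias lines in the sudoers file above follow one pattern per site and per state. As a hedged sketch, they can be generated from the SID and the site names so they stay consistent if sites are renamed; the function name is illustrative, and output goes to stdout rather than /etc/sudoers.d:

```shell
# Generate srHook sudoers Cmnd_Alias entries for a given SID and site list.
# Mirrors the pattern used in /etc/sudoers.d/20-saphana above.
gen_srhook_sudoers() {
    sid="$1"
    shift
    for site in "$@"; do
        for state in SOK SFAIL; do
            echo "Cmnd_Alias ${site}_${state} = /usr/sbin/crm_attribute -n hana_${sid}_site_srHook_${site} -v ${state} -t crm_config -s SAPHanaSR"
        done
    done
}

gen_srhook_sudoers rh1 DC1 DC2
```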
Create the SAPHanaTopology resource:
[root]# pcs resource create SAPHanaTopology_RH1_00 SAPHanaTopology SID=RH1 \
> InstanceNumber=00 \
> op start timeout=600 \
> op stop timeout=300 \
> op monitor interval=10 timeout=600 \
> clone clone-max=2 clone-node-max=1 interleave=true
Create the SAPHana resource:
[root]# pcs resource create SAPHana_RH1_00 SAPHana SID=RH1 \
> InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
> DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true \
> op start timeout=3600 \
> op stop timeout=3600 \
> op monitor interval=61 role="Slave" timeout=700 \
> op monitor interval=59 role="Master" timeout=700 \
> op promote timeout=3600 \
> op demote timeout=3600 \
> promotable notify=true clone-max=2 clone-node-max=1 interleave=true
Create a virtual IP address resource:
[root]# pcs resource create vip_RH1_00 IPaddr2 ip="192.168.0.15"

Create constraints:
[root]# pcs constraint order SAPHanaTopology_RH1_00-clone \
> then SAPHana_RH1_00-master symmetrical=false
[root]# pcs constraint colocation add vip_RH1_00 with master \
> SAPHana_RH1_00-master 2000
Optional: Add a secondary virtual IP address resource:
[root]# pcs constraint location vip2_RH1_00 rule score=INFINITY \
> hana_rh1_sync_state eq SOK and hana_rh1_roles eq \
> 4:S:master1:master:worker:master
[root]# pcs constraint location vip2_RH1_00 rule score=2000 \
> hana_rh1_sync_state eq PRIM and hana_rh1_roles eq \
> 4:P:master1:master:worker:master
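The hana_rh1_roles value used in these rules is a colon-separated string; its second field is P on the primary and S on the secondary, which is what the two location rules match on. A small illustrative helper (the function name is an assumption, not part of any tool):

```shell
# Split a hana_<sid>_roles attribute value on ":" and report whether the
# node is primary (P) or secondary (S), as used in the constraint rules.
decode_roles() {
    oldIFS=$IFS
    IFS=:
    set -- $1
    IFS=$oldIFS
    echo "primary_or_secondary=$2"
}

decode_roles "4:S:master1:master:worker:master"
```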
Verify SAP HANA system replication:
[rh1adm]# python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py
| Host  | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary     | Replication | Replication |
|       |       |              |           |         |           | Host      | Port      | Site ID   | Site Name | Active Status | Mode        | Status      |
| ----- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- |
| node1 | 30201 | nameserver   | 1         | 1       | DC1       | node2     | 30201     | 2         | DC2       | YES           | SYNCMEM     | ACTIVE      |
| node1 | 30207 | xsengine     | 2         | 1       | DC1       | node2     | 30207     | 2         | DC2       | YES           | SYNCMEM     | ACTIVE      |
| node1 | 30203 | indexserver  | 3         | 1       | DC1       | node2     | 30203     | 2         | DC2       | YES           | SYNCMEM     | ACTIVE      |

status system replication site "2": ACTIVE
overall system replication status: ACTIVE

Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

mode: PRIMARY
site id: 1
site name: DC1

[root]# /usr/sap/RH1/HDB02/exe/hdbuserstore list
DATA FILE : /root/.hdb/node1/SSFS_HDB.DAT
KEY FILE  : /root/.hdb/node1/SSFS_HDB.KEY

KEY SAPHANARH1SR
  ENV : localhost:30215
  USER: rhelhasync
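For scripted checks, the overall status line of the systemReplicationStatus.py output can be extracted with awk. A minimal sketch against a saved sample; the sample text mirrors the output above:

```shell
# Extract the overall system replication status from saved
# systemReplicationStatus.py output (sample shown inline here).
sample='status system replication site "2": ACTIVE
overall system replication status: ACTIVE'

overall="$(printf '%s\n' "$sample" | awk -F': ' '/^overall system replication status/ { print $2 }')"
echo "$overall"
```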
Testing the SrHook:
# To check if hook scripts are working
[rh1adm]# cdtrace
[rh1adm]# awk '/ha_dr_SAPHanaSR.*crm_attribute/ { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
2018-05-04 12:34:04.476445 ha_dr_SAPHanaSR SFAIL
2018-05-04 12:53:06.316973 ha_dr_SAPHanaSR SOK
[rh1adm]# grep ha_dr_ *
Verify the SAPHanaTopology resource:
[root]# pcs resource show SAPHanaTopology_RH1_00-clone
Clone: SAPHanaTopology_RH1_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_RH1_00 (class=ocf provider=heartbeat type=SAPHanaTopology)
Attributes: SID=RH1 InstanceNumber=00
Operations: start interval=0s timeout=600 (SAPHanaTopology_RH1_00-start-interval-0s)
stop interval=0s timeout=300 (SAPHanaTopology_RH1_00-stop-interval-0s)
monitor interval=10 timeout=600 (SAPHanaTopology_RH1_00-monitor-interval-10s)

Verify the SAPHana resource:
[root]# pcs resource config SAPHana_RH1_00
Clone: SAPHana_RH1_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true promotable=true
Resource: SAPHana_RH1_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=true DUPLICATE_PRIMARY_TIMEOUT=7200 InstanceNumber=00 PREFER_SITE_TAKEOVER=true SID=RH1
Operations: demote interval=0s timeout=3600 (SAPHana_RH1_00-demote-interval-0s)
methods interval=0s timeout=5 (SAPHana_RH1_00-methods-interval-0s)
monitor interval=61 role=Slave timeout=700 (SAPHana_RH1_00-monitor-interval-61)
monitor interval=59 role=Master timeout=700 (SAPHana_RH1_00-monitor-interval-59)
promote interval=0s timeout=3600 (SAPHana_RH1_00-promote-interval-0s)
start interval=0s timeout=3600 (SAPHana_RH1_00-start-interval-0s)
stop interval=0s timeout=3600 (SAPHana_RH1_00-stop-interval-0s)

Verify the cluster:
[root]# crm_mon -A1
...
Node Attributes:
* Node node1:
+ hana_rh1_clone_state : PROMOTED
+ hana_rh1_op_mode : logreplay
+ hana_rh1_remoteHost : node2
+ hana_rh1_roles : 4:P:master1:master:worker:master
+ hana_rh1_site : DC1
+ hana_rh1_srmode : sync
+ hana_rh1_sync_state : PRIM
+ hana_rh1_version : 2.00.***.
+ hana_rh1_vhost : node1
+ lpa_rh1_lpt : 1659691427
+ master-SAPHana_RH1_00 : 150
* Node node2:
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_op_mode : logreplay
+ hana_rh1_remoteHost : node1
+ hana_rh1_roles : 4:S:master1:master:worker:master
+ hana_rh1_site : DC2
+ hana_rh1_srmode : sync
+ hana_rh1_sync_state : SOK
+ hana_rh1_version : 2.00.***.
+ hana_rh1_vhost : node2
+ lpa_rh1_lpt : 30
+ master-SAPHana_RH1_00 : 100
...

Verify VIP1:
[root]# pcs resource show vip_RH1_00
Resource: vip_RH1_00 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=192.168.0.15
Operations: start interval=0s timeout=20s (vip_RH1_00-start-interval-0s)
stop interval=0s timeout=20s (vip_RH1_00-stop-interval-0s)
monitor interval=10s timeout=20s (vip_RH1_00-monitor-interval-10s)

This concludes the section for the SAP HANA scale-up resource agent.