After completing this section, you should be able to describe, plan, and set up the SAP HANA scale-out resource agent, and explain how it differs from the scale-up resource agent.
The SAP HANA scale-out resource agent handles and automates SAP HANA scale-out system replication. The agent consists of the following resources:
SAPHanaController
SAPHanaTopology
The resource agent is part of the resource-agents-sap-hana-scaleout package.
The SAPHanaTopology resource runs on every node and gathers information about the running SAP HANA instances.
The SAPHanaController resource handles the replicated SAP HANA database instances.
The behavior is similar to the SAP HANA scale-up resource agent.
If the primary site fails, then the resource automates failover. The former secondary database then becomes the new primary database. You can choose whether the failed site is automatically registered as a secondary site.
To do so, you must set the AUTOMATED_REGISTER parameter.
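For example, assuming that the SAPHanaController resource is named rsc_SAPHana_RH1_HDB00, as in the examples later in this section, you could enable automatic registration with a command like the following sketch (adjust the resource name to your environment):
[root]# pcs resource update rsc_SAPHana_RH1_HDB00 AUTOMATED_REGISTER=true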
Unlike SAP HANA scale-up, in SAP HANA scale-out the database is distributed across multiple nodes. Both sides of the cluster must have the same number of database nodes.
For more information about setting up the resource, display online help with the following commands:
[root]# pcs resource describe SAPHanaTopology
[root]# pcs resource describe SAPHanaController
More details are also described in Red Hat Enterprise Linux HA Solution for SAP HANA Scale-Out and System Replication, https://access.redhat.com/sites/default/files/attachments/v8_ha_solution_for_sap_hana_scale_out_system_replication_1.pdf
To configure the SAP HANA scale-out resource agent, note the following prerequisites:
Base installation of a SAP HANA scale-out environment with two sites.
This setup generally comprises more than four SAP HANA nodes with the same SID and InstanceNumber values.
An equal number of nodes are divided across (conventionally, two) sites or data centers, with one site being the primary and the other secondary.
SAP HANA system replication is then configured between the stated primary and secondary sites.
The same version of the resource-agents-sap-hana-scaleout package is installed on all the cluster nodes that run SAP HANA.
The same version of the Pacemaker cluster packages is installed on all nodes.
A Pacemaker cluster is configured with all the nodes that run a SAP HANA database as cluster nodes, plus an additional majority maker node to ensure quorum if a node or an entire site fails.
Do not confuse scale-out clusters, where all the nodes are part of the same single cluster, with multi-site clusters, which are multiple clusters of the same type. For more information about multi-site Pacemaker clusters, see the following link: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_and_managing_high_availability_clusters/index#assembly_configuring-multisite-cluster-configuring-and-managing-high-availability-clusters
Before you start, run the following steps to verify the configuration:
[root]# yum info resource-agents-sap-hana-scaleout
[root]# pcs cluster status --full
[sidadm]% hdbnsutil -sr_state
To verify whether the resource agent is installed, use the following command:
[root]# yum info resource-agents-sap-hana-scaleout --installed
Updating Subscription Management repositories.
Installed Packages
Name : resource-agents-sap-hana-scaleout
Epoch : 1
Version : 0.180.0
Release : 0.el8_4.1
Architecture : noarch
Size : 332 k
Source : resource-agents-sap-hana-scaleout-0.180.0-0.el8_4.1.src.rpm
Repository : @System
From repo : @commandline
Summary : SAP HANA Scale-Out cluster resource agents
URL : https://github.com/SUSE/SAPHanaSR-ScaleOut
License : GPLv2+
Description : The SAP HANA Scale-Out resource agents interface with Pacemaker
: to allow SAP HANA Scale-Out instances to be managed in a cluster
            : environment.
Verify the status of the cluster:
[root]# pcs status
...
Node List:
* Online: [ hana01 hana02 ]
...
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Verify the status of SAP HANA system replication:
[sidadm]% hdbnsutil -sr_state
Whether you configure SAP HANA system replication with Ansible or manually, you must specify these parameters:
| Name | Description | Example |
|---|---|---|
| node_name | Name of the node | hana01, hana02 |
| hana_site | Location name of the HANA instance | DC1, DC2 |
| node_role | Role of the node | primary or secondary |
| replicationMode | Replication Mode for HANA System Replication, https://bit.ly/3QIvfxi | sync or syncmem |
| operationMode | Operation Mode for HANA System Replication, https://bit.ly/3QvWrja | delta_datashipping or logreplay or logreplay_readaccess |
| sap_hana_vip1 | Virtual IP address for the SAP HANA primary node | 192.168.1.111 |
| sap_hana_vip2 | Virtual IP address for the SAP HANA secondary read-enabled node | 192.168.1.112 |
| sap_hana_sid | SID system identifier | RH1 |
| sap_hana_instance_number | Instance number | 00 |
| PREFER_SITE_TAKEOVER | Will the resource agent prefer to switch over to the secondary instance instead of restarting the primary locally? | true or false or never |
| DUPLICATE_PRIMARY_TIMEOUT | Time difference, in seconds, needed between two primary time stamps if a dual-primary situation occurs, before automatic failover is possible | Default 7200 |
| AUTOMATED_REGISTER | Whether the former primary should automatically be registered as secondary | Default false |
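For a manual setup, several of these parameters map directly to the hdbnsutil commands that enable and register system replication. The following is a minimal sketch that assumes the SID RH1, instance number 00, and the site names DC1 and DC2 from the table; the host name and modes are examples that you must adapt to your environment:
[rh1adm]# hdbnsutil -sr_enable --name=DC1
[rh1adm]# hdbnsutil -sr_register --remoteHost=dc1hana01 --remoteInstance=00 \
    --replicationMode=syncmem --operationMode=logreplay --name=DC2
Run the first command on the primary site, and the second command on the secondary site while its database is stopped.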
This section includes various command examples to set up the environment.
Without using Ansible, you can follow these installation steps, which are also described in Red Hat Enterprise Linux HA Solution for SAP HANA Scale-out and System Replication, https://access.redhat.com/solutions/4386601
The following steps are needed:
Shared /hana/shared directories per site for scale-out installation.
Install SAP HANA on all the relevant nodes.
You can add more SAP HANA nodes to the environment by using the add_hosts feature with the /hana/shared/RH1/hdblcm/hdblcm command.
Install SAP HANA on the other site, by using the same SID and instance number as the first site.
Authorize nodes with the pcs cluster auth command (a sketch of this step and the next follows this list).
Create the cluster.
Install the resource agent.
Install srHook.
Create the SAPHanaTopology resource.
Create the SAPHanaController resource.
Create a virtual IP address resource.
Create constraints.
Optional: Add a secondary virtual IP address resource.
Testing
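The node authorization and cluster creation steps could look like the following sketch, which uses the example node and cluster names from later in this section and the RHEL 8 pcs syntax (on RHEL 7, use pcs cluster auth and the older pcs cluster setup syntax instead):
[root]# pcs host auth dc1hana01 dc1hana02 dc1hana03 dc1hana04 \
    dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker -u hacluster
[root]# pcs cluster setup hanascaleoutsr dc1hana01 dc1hana02 dc1hana03 dc1hana04 \
    dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker
[root]# pcs cluster start --all
[root]# pcs cluster enable --all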
Detailed installation steps follow.
Setting up a scale-out configuration is similar to scale-up, with the following differences:
More than four HANA nodes within the same cluster
Additional shared mount points
Additional majoritymaker node
The cluster configuration and resource definition are similar to scale-up, and are documented here: Red Hat Enterprise Linux HA Solution for SAP HANA Scale-out and System Replication, https://access.redhat.com/sites/default/files/attachments/v10_ha_solution_for_sap_hana_scale_out_system_replication_0.pdf
The database is running on multiple nodes.
These nodes need shared mount points.
Assuming that the databases are running in two data centers, DC1 and DC2, the following nodes are used in this example:
dc1hana01
dc1hana02
dc1hana03
dc1hana04
dc2hana01
dc2hana02
dc2hana03
dc2hana04
All the data centers must have the same number of nodes.
To ensure a majority in terms of nodes or entire site failure, an extra node is added to the configuration, and is named as the majoritymaker node.
The additional majoritymaker node does not run any SAP services.
Therefore, it can be smaller than the database nodes in terms of CPU and RAM resources.
Constraints are defined to prevent this node from taking over any SAP instances.
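For example, location constraints such as the following keep the SAPHanaTopology and SAPHanaController clones off the majoritymaker node. This is a sketch that uses the resource names from the examples later in this section:
[root]# pcs constraint location rsc_SAPHanaTopology_RH1_HDB00-clone avoids majoritymaker
[root]# pcs constraint location rsc_SAPHana_RH1_HDB00-clone avoids majoritymaker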
The following requirements apply to all nodes:
Base RHEL is installed.
Subscription for RHEL cluster is registered.
NFS services are installed and enabled.
RHEL HA Pacemaker cluster packages are installed.
The resource-agents-sap-hana-scaleout package for managing SAP HANA scale-out is installed, as mentioned earlier.
The respective /hana/shared NFS mount must be mounted on the HANA nodes of each site.
For example, a separate share per site would look as follows:
HANA_SHARED_DC1
HANA_SHARED_DC2
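As a sketch, preparing one node of the DC1 site could look as follows, assuming a hypothetical NFS server named nfsserver that exports the HANA_SHARED_DC1 share (package selection and mount options depend on your environment):
[root]# yum install -y pcs pacemaker fence-agents-all resource-agents-sap-hana-scaleout
[root]# mkdir -p /hana/shared
[root]# echo "nfsserver:/HANA_SHARED_DC1  /hana/shared  nfs  defaults  0 0" >> /etc/fstab
[root]# mount /hana/shared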
Installing the first database node is identical to the previously described method.
The difference is installing the additional nodes.
Instead of using hdblcm from the software distribution, you must use the hdblcm that is stored in the /hana/shared mount point.
For example, use /hana/shared/RH1/hdblcm/hdblcm to install the additional nodes.
hdblcm in /hana/shared is available after the installation of the first database instance is complete.
The installation must be repeated on each site, by using the same SID and instance number. Use one of the following implementation methods:
Either: Install the database on the first node by using hdblcm from the distribution directory, and then use the option to add hosts: add_hosts.
Or, after the installation is done, use hdblcm in the /hana/shared directory to add hosts to the first database.
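For example, adding worker and standby hosts to the first instance with the shared hdblcm could look like the following sketch, assuming the SID RH1 and the DC1 host names that are used in this section (the roles and host names are examples):
[root]# /hana/shared/RH1/hdblcm/hdblcm --action=add_hosts \
    --addhosts=dc1hana02:role=worker,dc1hana03:role=worker,dc1hana04:role=standby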
Some differences exist between the cluster installation of a scale-out versus a scale-up environment:
More than three nodes in a single cluster.
Possibly more fencing devices, and their respective STONITH resources, depending on the types of nodes.
Different software package: resource-agents-sap-hana-scaleout
Different resource agent for managing SAPHana: SAPHanaController
Additional constraints to prevent SAP services running on the majoritymaker node.
The sudoers and srHook configuration is required on all HANA nodes of each site.
Detailed installation steps follow:
The /etc/sudoers.d/20-saphana file on all nodes must have the following entries:
# SAPHanaSR-ScaleOut needs for srHook
Cmnd_Alias SOK   = /usr/sbin/crm_attribute -n hana_rh1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_rh1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
rh1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
Defaults!SOK, SFAIL !requiretty
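The srHook itself is enabled in the global.ini file of each SAP HANA site. The following is a minimal sketch that assumes the SAPHanaSR hook script from the resource-agents-sap-hana-scaleout package is installed under /usr/share/SAPHanaSR-ScaleOut/; verify the actual path and syntax in the referenced Red Hat solution document:
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR-ScaleOut/
execution_order = 1

[trace]
ha_dr_saphanasr = info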
Start the SAP HANA database on all nodes in each data center by using the following command:
[rh1adm]# sapcontrol -nr 00 -function StartSystem
Configure general cluster properties:
[root]# pcs resource defaults update resource-stickiness=1000
[root]# pcs resource defaults update migration-threshold=5000
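Optionally, confirm the updated defaults:
[root]# pcs resource defaults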
Create the SAPHanaTopology resource:
[root]# pcs resource create rsc_SAPHanaTopology_RH1_HDB00 SAPHanaTopology \
    SID=RH1 InstanceNumber=00 op methods interval=0s timeout=5 \
    op monitor interval=10 timeout=600
[root]# pcs resource clone rsc_SAPHanaTopology_RH1_HDB00 clone-node-max=1 \
    interleave=true
Create the SAPHanaController resource:
[root]# pcs resource create rsc_SAPHana_RH1_HDB00 SAPHanaController \
    SID=RH1 InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true \
    op demote interval=0s timeout=320 op methods interval=0s timeout=5 \
    op monitor interval=59 role="Master" timeout=700 \
    op monitor interval=61 role="Slave" timeout=700 \
    op promote interval=0 timeout=3600 op start interval=0 timeout=3600 \
    op stop interval=0 timeout=3600
[root]# pcs resource promotable rsc_SAPHana_RH1_HDB00 promoted-max=1 \
    clone-node-max=1 interleave=true
Create a virtual IP address resource:
[root]# pcs resource create rsc_ip_SAPHana_RH1_HDB00 IPaddr2 ip="192.168.0.15"
Create constraints:
[root]# pcs constraint order SAPHanaTopology_RH1_00-clone then \
    SAPHana_RH1_00-master symmetrical=false
[root]# pcs constraint colocation add vip_RH1_00 with master \
    SAPHana_RH1_00-master 2000
Constraint to start SAPHanaTopology before SAPHanaController
[root]# pcs constraint order start rsc_SAPHanaTopology_RH1_HDB10-clone then \
    start rsc_SAPHana_RH1_HDB10-clone
Constraint to avoid starting SAPHanaTopology on majoritymaker
[root]# pcs constraint location rsc_SAPHanaTopology_RH1_HDB00-clone \
    avoids majoritymaker
Colocate the IPaddr2 resource with the primary SAPHana resource:
[root]# pcs constraint colocation add rsc_ip_SAPHana_RH1_HDB00 with master \
    rsc_SAPHana_RH1_HDB00-clone
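Optionally, list the constraints to confirm them:
[root]# pcs constraint --full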
List all SAP HANA database instances.
Verify the presence of the GREEN keyword in the output, to show that the database instance started on the listed node.
[rh1adm]# /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function GetSystemInstanceList
10.04.2019 08:38:21
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features,
dispstatus
dc1hana01, 00, 50013, 50014, 0.3, HDB|HDB_WORKER, GREEN
dc1hana02, 00, 50013, 50014, 0.3, HDB|HDB_WORKER, GREEN
dc1hana03, 00, 50013, 50014, 0.3, HDB|HDB_WORKER, GREEN
dc1hana04, 00, 50013, 50014, 0.3, HDB|HDB_STANDBY, GREEN
Verify the landscape host configuration:
[rh1adm]# HDBSettings.sh landscapeHostConfiguration.py
| Host      | Host   | Host   | Failover | Remove | Storage   | Storage   | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host    | Host    | Worker  | Worker  |
|           | Active | Status | Status   | Status | Config    | Actual    | Config   | Actual   | Config     | Actual     | Config      | Actual      | Config  | Actual  | Config  | Actual  |
|           |        |        |          |        | Partition | Partition | Group    | Group    | Role       | Role       | Role        | Role        | Roles   | Roles   | Groups  | Groups  |
| --------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- |
| dc1hana01 | yes    | ok     |          |        | 1         | 1         | default  | default  | master 1   | master     | worker      | master      | worker  | worker  | default | default |
| dc1hana02 | yes    | ok     |          |        | 2         | 2         | default  | default  | master 3   | slave      | worker      | slave       | worker  | worker  | default | default |
| dc1hana03 | yes    | ok     |          |        | 2         | 2         | default  | default  | master 3   | slave      | worker      | slave       | worker  | worker  | default | default |
| dc1hana04 | yes    | ignore |          |        | 0         | 0         | default  | default  | master 2   | slave      | standby     | standby     | standby | standby | default | -       |
ok
Verify the HANA databases process information:
[rh1adm]# HDB info
USER      PID    PPID  %CPU     VSZ     RSS COMMAND
rh1adm    31321  31320  0.0   116200    2824 -bash
rh1adm    32254  31321  0.0   113304    1680  \_ /bin/sh /usr/sap/RH1/HDB00/HDB info
rh1adm    32286  32254  0.0   155356    1868      \_ ps fx -U rh1adm -o user:8,pid:8,ppid:8,pcpu:5,vsz:10,rss:10,args
rh1adm    27853      1  0.0    23916    1780 sapstart pf=/hana/shared/RH1/profile/RH1_HDB00_dc1hana01
rh1adm    27863  27853  0.0   262272   32368  \_ /usr/sap/RH1/HDB00/dc1hana01/trace/hdb.sapRH1_HDB00 -d -nw -f /usr/sap/RH1/HDB00/dc1hana01/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB00_dc1hana01
rh1adm    27879  27863 53.0  9919108 6193868      \_ hdbnameserver
rh1adm    28186  27863  0.7  1860416  268304      \_ hdbcompileserver
rh1adm    28188  27863 65.8  3481068 1834440      \_ hdbpreprocessor
rh1adm    28228  27863 48.2  9431440 6481212      \_ hdbindexserver -port 30003
rh1adm    28231  27863  2.1  3064008  930796      \_ hdbxsengine -port 30007
rh1adm    28764  27863  1.1  2162344  302344      \_ hdbwebdispatcher
rh1adm    27763      1  0.2   502424   23376 /usr/sap/RH1/HDB00/exe/sapstartsrv pf=/hana/shared/RH1/profile/RH1_HDB00_dc1hana01 -D -u rh1adm
Verify SAP HANA system replication:
[rh1adm]# python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py
| Host  | Port  | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary     | Replication | Replication |
|       |       |              |           |         |           | Host      | Port      | Site ID   | Site Name | Active Status | Mode        | Status      |
| ----- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- |
| node1 | 30201 | nameserver   | 1         | 1       | DC1       | node2     | 30201     | 2         | DC2       | YES           | SYNCMEM     | ACTIVE      |
| node1 | 30207 | xsengine     | 2         | 1       | DC1       | node2     | 30207     | 2         | DC2       | YES           | SYNCMEM     | ACTIVE      |
| node1 | 30203 | indexserver  | 3         | 1       | DC1       | node2     | 30203     | 2         | DC2       | YES           | SYNCMEM     | ACTIVE      |

status system replication site "2": ACTIVE
overall system replication status: ACTIVE

Local System Replication State
~~~~~~~~~~
mode: PRIMARY
site id: 1
site name: DC1

[root]# /usr/sap/RH1/HDB02/exe/hdbuserstore list
DATA FILE       : /root/.hdb/node1/SSFS_HDB.DAT
KEY FILE        : /root/.hdb/node1/SSFS_HDB.KEY

KEY SAPHANARH1SR
  ENV : localhost:30215
  USER: rhelhasync
Verify that the SrConnectionChangedHook is working:
[rh1adm]# cdtrace
[rh1adm]# awk '/ha_dr_SAPHanaSR.*crm_attribute/ { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
2018-05-04 12:34:04.476445 ha_dr_SAPHanaSR SFAIL
2018-05-04 12:53:06.316973 ha_dr_SAPHanaSR SOK
[rh1adm]# grep ha_dr_ *
Verify the SAPHanaTopology resource:
[root]# pcs resource config SAPHanaTopology_RH1_00-clone
Clone: SAPHanaTopology_RH1_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_RH1_00 (class=ocf provider=heartbeat type=SAPHanaTopology)
Attributes: SID=RH1 InstanceNumber=02
Operations: start interval=0s timeout=600 (SAPHanaTopology_RH1_00-start-interval-0s)
stop interval=0s timeout=300 (SAPHanaTopology_RH1_00-stop-interval-0s)
monitor interval=10 timeout=600 (SAPHanaTopology_RH1_00-monitor-interval-10s)
Verify the SAPHana resource:
[root]# pcs resource config SAPHana_RH1_00
Clone: SAPHana_RH1_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true promotable=true
Resource: SAPHana_RH1_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=true DUPLICATE_PRIMARY_TIMEOUT=180 InstanceNumber=02 PREFER_SITE_TAKEOVER=true SID=RH1
Operations: demote interval=0s timeout=3600 (SAPHana_RH1_00-demote-interval-0s)
methods interval=0s timeout=5 (SAPHana_RH1_00-methods-interval-0s)
monitor interval=61 role=Slave timeout=700 (SAPHana_RH1_00-monitor-interval-61)
monitor interval=59 role=Master timeout=700 (SAPHana_RH1_00-monitor-interval-59)
promote interval=0s timeout=3600 (SAPHana_RH1_00-promote-interval-0s)
start interval=0s timeout=3600 (SAPHana_RH1_00-start-interval-0s)
stop interval=0s timeout=3600 (SAPHana_RH1_00-stop-interval-0s)
Verify the cluster status:
[root]# pcs status --full
Cluster name: hanascaleoutsr
Stack: corosync
Current DC: majoritymaker (9) (version 1.1.18-11.el7_5.4-2b07d5c5a9) - partition with quorum
Last updated: Tue Mar 26 16:34:22 2019
Last change: Tue Mar 26 16:34:03 2019 by root via crm_attribute on dc2hana01
9 nodes configured
20 resources configured
Online: [ dc1hana01 (1) dc1hana02 (2) dc1hana03 (3) dc1hana04 (4) dc2hana01 (5) dc2hana02 (6) dc2hana03 (7) dc2hana04 (8) majoritymaker (9) ]
Full list of resources:
Clone Set: rsc_SAPHanaTopology_RH1_HDB10-clone [rsc_SAPHanaTopology_RH1_HDB10]
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc2hana02
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc1hana03
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc2hana04
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc2hana03
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc1hana04
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc1hana01
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc1hana02
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Started dc2hana01
    rsc_SAPHanaTopology_RH1_HDB10  (ocf::heartbeat:SAPHanaTopology):  Stopped
    Started: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 ]
    Stopped: [ majoritymaker ]
Master/Slave Set: msl_rsc_SAPHana_RH1_HDB10 [rsc_SAPHana_RH1_HDB10]
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Slave dc2hana02
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Slave dc1hana03
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Slave dc2hana04
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Slave dc2hana03
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Slave dc1hana04
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Slave dc1hana01
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Slave dc1hana02
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Master dc2hana01
    rsc_SAPHana_RH1_HDB10  (ocf::heartbeat:SAPHanaController):  Stopped
    Masters: [ dc2hana01 ]
    Slaves: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana02 dc2hana03 dc2hana04 ]
    Stopped: [ majoritymaker ]
rsc_ip_SAPHana_RH1_HDB10  (ocf::heartbeat:IPaddr2):  Started dc2hana01
fencing  (stonith:fence_rhevm):  Started majoritymaker
Node Attributes:
* Node dc1hana01 (1):
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_roles : master2:master:worker:master
+ hana_rh1_site : DC1
+ master-rsc_SAPHana_RH1_HDB10 : 100
* Node dc1hana02 (2):
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_roles : slave:slave:worker:slave
+ hana_rh1_site : DC1
+ master-rsc_SAPHana_RH1_HDB10 : -12200
* Node dc1hana03 (3):
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_roles : master3:slave:worker:standby
+ hana_rh1_site : DC1
+ master-rsc_SAPHana_RH1_HDB10 : 80
* Node dc1hana04 (4):
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_roles : master1:slave:standby:slave
+ hana_rh1_site : DC1
+ master-rsc_SAPHana_RH1_HDB10 : 80
* Node dc2hana01 (5):
+ hana_rh1_clone_state : PROMOTED
+ hana_rh1_roles : master1:master:worker:master
+ hana_rh1_site : DC2
+ master-rsc_SAPHana_RH1_HDB10 : 150
* Node dc2hana02 (6):
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_roles : master3:slave:worker:slave
+ hana_rh1_site : DC2
+ master-rsc_SAPHana_RH1_HDB10 : 110
* Node dc2hana03 (7):
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_roles : slave:slave:worker:slave
+ hana_rh1_site : DC2
+ master-rsc_SAPHana_RH1_HDB10 : -10000
* Node dc2hana04 (8):
+ hana_rh1_clone_state : DEMOTED
+ hana_rh1_roles : master1:slave:standby:standby
+ hana_rh1_site : DC2
+ master-rsc_SAPHana_RH1_HDB10 : 115
* Node majoritymaker (9):
+ hana_rh1_roles : :shtdown:shtdown:shtdown
Migration Summary:
* Node dc2hana02 (6):
* Node majoritymaker (9):
* Node dc1hana03 (3):
* Node dc2hana04 (8):
* Node dc2hana03 (7):
* Node dc1hana04 (4):
* Node dc1hana01 (1):
* Node dc1hana02 (2):
* Node dc2hana01 (5):
PCSD Status:
dc1hana01: Online
dc2hana03: Online
dc2hana04: Online
dc1hana03: Online
dc2hana01: Online
majoritymaker: Online
dc2hana02: Online
dc1hana04: Online
dc1hana02: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Verify the system replication attributes with the SAPHanaSR-showAttr command:
[root]# SAPHanaSR-showAttr --sid=RH1
Global prim srHook sync_state
------------------------------
global DC1 SOK SOK
Sit lpt lss mns srr
---------------------------------
DC1 1553607125 4 dc1hana01 P
DC2 30 4 dc2hana01 S
H clone_state roles score site
--------------------------------------------------------
1 PROMOTED master1:master:worker:master 150 DC1
2 DEMOTED master2:slave:worker:slave 110 DC1
3 DEMOTED slave:slave:worker:slave -10000 DC1
4 DEMOTED master3:slave:standby:standby 115 DC1
5 DEMOTED master2:master:worker:master 100 DC2
6 DEMOTED master3:slave:worker:slave 80 DC2
7 DEMOTED slave:slave:worker:slave -12200 DC2
8 DEMOTED master1:slave:standby:standby 80 DC2
9 :shtdown:shtdown:shtdown
Verify the VIP1 configuration:
[root]# pcs resource show vip_RH1_00
Resource: vip_RH1_00 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=192.168.0.15
Operations: start interval=0s timeout=20s (vip_RH1_00-start-interval-0s)
stop interval=0s timeout=20s (vip_RH1_00-stop-interval-0s)
monitor interval=10s timeout=20s (vip_RH1_00-monitor-interval-10s)
This concludes the section for the SAP HANA scale-out resource agent.