
Installing SAP S/4HANA and SAP NetWeaver Resource Agent

Objectives

After completing this section, you should be able to describe, plan, and set up SAP S/4HANA and SAP NetWeaver.

Overview

SAP S/4HANA systems play an important role in business processes, so it is critical for such systems to be highly available. The underlying idea of clustering is that no single large machine bears all of the load and risk. Instead, if a service or a machine fails, one or more other machines automatically step in as an instant, full replacement. In the best case, this replacement process causes no interruption for the systems' users.

This section is intended for SAP and Red Hat certified or trained administrators and consultants who already have experience with setting up highly available solutions by using the RHEL HA add-on or other clustering solutions. Access to both SAP Service Marketplace and Red Hat Customer Portal is required to download software and additional documentation.

It is highly recommended to engage Red Hat Consulting to set up the cluster and to customize the solution to meet the customer's data center requirements, which are typically more complex than the solution that is presented in this section.

Concepts

This section describes how to set up a two-node or a three-node cluster solution that conforms to the guidelines for high availability that both SAP and Red Hat established. The solution is based on Standalone Enqueue Server 2, which is now the default installation in SAP S/4HANA 1809 or later, on top of Red Hat Enterprise Linux 7.6 or later, or RHEL 8, with the RHEL HA Add-on.

According to SAP, the Standalone Enqueue Server 2 (ENSA2) is the successor to the Standalone Enqueue Server. It is a component of the SAP lock concept and manages the lock table, which ensures the consistency of data in an ABAP system.

Support Policies

It is important to adhere to the following support policy while configuring such environments: Support Policies for RHEL High Availability Clusters - Management of SAP S/4HANA, https://access.redhat.com/articles/4016901

Requirements

It is mandatory to keep the subscription, kernel, and patch level identical on all cluster nodes.

Subscription

Follow the Knowledgebase article https://access.redhat.com/solutions/3082471 to subscribe your systems to the Update Services for RHEL for SAP Solutions.

For the lab exercises in this course, the Ansible Automation scripts take care of these needs.
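A typical sequence on a RHEL 8 system looks like the following sketch. The repository IDs and the minor release that is locked are examples only and depend on the architecture and RHEL version; the Knowledgebase article above is the authoritative reference.

[root]# subscription-manager register
[root]# subscription-manager repos \
    --enable=rhel-8-for-x86_64-baseos-e4s-rpms \
    --enable=rhel-8-for-x86_64-appstream-e4s-rpms \
    --enable=rhel-8-for-x86_64-sap-solutions-e4s-rpms \
    --enable=rhel-8-for-x86_64-sap-netweaver-e4s-rpms \
    --enable=rhel-8-for-x86_64-highavailability-e4s-rpms
[root]# subscription-manager release --set=8.4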

Pacemaker Resource Agents

RHEL for SAP Solutions 7.6 or later is recommended; RHEL 8.0 or later is used in this course.
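As a sketch, the cluster software and the SAP resource agents (which include the SAPInstance agent used later in this section) can be installed on RHEL 8 with a command similar to the following; confirm the exact package names for your release in the product documentation.

[root]# yum install -y pcs pacemaker fence-agents-all resource-agents-sap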

SAP S/4HANA High-Availability Architecture

As explained in earlier chapters, a typical setup for SAP S/4HANA High Availability consists of the following distinct components:

  • SAP S/4 ASCS/ERS cluster resources

  • SAP S/4 application servers - Primary application server (PAS) and additional application servers (AAS)

  • Database: SAP HANA

Because the SAP HANA Cluster setup is covered elsewhere in this course, this section focuses only on configuring SAP S/4HANA ASCS, ERS, PAS, and AAS instances in a Pacemaker cluster. Red Hat recommends installing application servers and database instances on separate nodes.

Two-node Cluster versus Multi-node Cluster

With the old Standalone Enqueue Server, the ASCS instance was required to "follow" the Enqueue Replication Server after a failover: the HA software had to start the ASCS instance on the host where the ERS instance was running. In contrast, the new Standalone Enqueue Server 2 and Enqueue Replicator 2 no longer have this restriction, which makes a multi-node cluster possible. For more information about ENSA2, see SAP Note 2630416 - Support for Standalone Enqueue Server 2, https://launchpad.support.sap.com/#/notes/2630416

The ENSA2 in Pacemaker can be configured in either a two-node or a multi-node cluster. In a two-node cluster, ASCS fails over to where ERS is running. In a multi-node cluster, ASCS fails over to a spare node, as illustrated in the following diagram.

Figure 4.7: ASCS failover in a multi-node cluster

The following architecture diagram shows an example installation of a three-node cluster. The example in this section focuses on a two-node cluster setup, with a separate section on the design and configuration of a multi-node cluster.

Figure 4.8: Multi-Node cluster architecture for S/4HANA
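In Pacemaker, the preference to keep the ASCS and ERS instances on different nodes whenever possible is typically expressed with a negative colocation constraint and an optional ordering constraint, as in the following sketch. The resource group names are placeholders that assume the example SID and instance numbers used later in this section.

[root]# pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group -5000
[root]# pcs constraint order start s4h_ASCS20_group then stop s4h_ERS29_group \
    symmetrical=false kind=Optional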

SAPInstance Resource Agent

SAPInstance is a Pacemaker resource agent that is used for both ASCS and ERS resources. All operations of the SAPInstance resource agent use the sapstartsrv SAP start service framework.
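For illustration only, an ASCS resource could be created with a command similar to the following sketch, which assumes the example SID S4H, instance number 20, and virtual hostname s4ascs that are defined later in this section; the resource and group names are placeholders.

[root]# pcs resource create s4h_ascs20 SAPInstance \
    InstanceName="S4H_ASCS20_s4ascs" \
    START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_s4ascs \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 \
    --group s4h_ASCS20_group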

Storage Requirements

Put directories that are created for S/4 installation on shared storage, according to the following rules:

Instance-Specific Directories

A separate SAN LUN or NFS export must exist for each instance directory that can be mounted by the cluster on the node where the instance is supposed to be running.

For ASCS and ERS, respectively, the instance-specific directory must be present on the corresponding node as follows:

  • ASCS node: /usr/sap/SID/ASCS<Ins#>

  • ERS node: /usr/sap/SID/ERS<Ins#>

For application servers, make the following directory available on the corresponding designated node for the application server instance:

  • App Server D<Ins#>: /usr/sap/SID/D<Ins#>

When using SAN LUNs for the instance directories, use HA-LVM to ensure that the instance directories can be mounted on only one node at a time. For information about HA-LVM, see https://access.redhat.com/solutions/3067

When using NFS exports, if the directories are created on the same directory tree on an NFS file server, such as NetApp File Shares or Amazon EFS, then the force_unmount=safe option must be used when configuring the Filesystem resource. This option ensures that the cluster stops only the processes that run on the specific NFS export, instead of stopping all processes that run on the directory tree where the exports are created.
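For example, a cluster-managed NFS mount for the ASCS instance directory could be configured with a Filesystem resource similar to the following sketch; the NFS server name, export path, and resource and group names are placeholders.

[root]# pcs resource create s4h_fs_ascs20 Filesystem \
    device='<nfs_server>:/export/S4H/ASCS20' \
    directory='/usr/sap/S4H/ASCS20' \
    fstype='nfs' force_unmount='safe' \
    --group s4h_ASCS20_group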

Shared Directories

The following mount points must be available on ASCS, ERS, and application server nodes:

/sapmnt
/usr/sap/trans
/usr/sap/SID/SYS

Shared Directories on HANA

The following mount point must be available on the HANA node:

/sapmnt

Shared storage can be provided in any of the following ways:

  • Use of an external NFS server. The NFS server cannot run on any node of the cluster in which the shares are mounted. For more details about this limitation, see the article Hangs Occur if a Red Hat Enterprise Linux System Is Used as Both NFS Server and NFS Client for the Same Mount, https://access.redhat.com/solutions/22231

  • Use of the GFS2 file system. This requires all nodes to also have the Resilient Storage Add-On subscription.

  • Use of the glusterfs file system. Review the additional notes in the article Can glusterfs Be Used for the SAP NetWeaver Shared File Systems? https://access.redhat.com/solutions/3047511

These mount points must be either managed by the cluster, or be mounted before the cluster is started.
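If the shared directories are mounted statically instead of being managed by the cluster, the /etc/fstab entries on each node could look similar to the following sketch, which assumes a hypothetical NFS server nfs.example.com and the example SID S4H that is used in the next subsection.

nfs.example.com:/export/sapmnt     /sapmnt            nfs  defaults  0 0
nfs.example.com:/export/saptrans   /usr/sap/trans     nfs  defaults  0 0
nfs.example.com:/export/S4H/SYS    /usr/sap/S4H/SYS   nfs  defaults  0 0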

Installation Parameters for SAP S/4

The following configuration options are used for the instances in this section.

Two nodes run the ASCS/ERS instances in Pacemaker:

1st node hostname:      s4node1
2nd node hostname:      s4node2

SID:                    S4H

ASCS Instance number:   20
ASCS virtual hostname:  s4ascs

ERS Instance number:    29
ERS virtual hostname:   s4ers

PAS Instance number:    21
AAS Instance number:    22

HANA Database:

SID:                    S4D
HANA Instance number:   00
HANA virtual hostname:  s4db
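If name resolution is handled through /etc/hosts, the entries could look similar to the following sketch. All IP addresses and the example.com domain are placeholders for illustration.

192.0.2.11   s4node1.example.com   s4node1
192.0.2.12   s4node2.example.com   s4node2
192.0.2.20   s4ascs.example.com    s4ascs
192.0.2.29   s4ers.example.com     s4ers
192.0.2.10   s4db.example.com      s4db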

Preparing the Hosts

Before starting installation, ensure that the following requirements are met:

  • Install RHEL for SAP Solutions 7.6+, or RHEL 8 (latest is recommended).

  • Register the system to Red Hat Subscription Management or Satellite, and enable the RHEL for SAP Applications channel or the Update Services (E4S) channel.

  • Enable the High Availability Add-on channel.

  • Shared storage and file systems must be present at the correct mount points.

  • Virtual IP addresses used by instances must be present and reachable (see the example after this list).

  • Hostnames used by instances can be resolved to IP addresses and back.

  • Installation media are available.

  • The system is configured according to the recommendations for running SAP S/4.
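As an illustration of the virtual IP and name resolution requirements, the address for the ASCS instance can be brought up temporarily and verified before the installation. The address 192.0.2.20 and the interface eth0 are placeholders; in the final cluster configuration, the virtual IP is typically managed by an IPaddr2 resource instead of being added manually.

[root@s4node1]# ip addr add 192.0.2.20/24 dev eth0
[root@s4node1]# getent hosts s4ascs
[root@s4node1]# ping -c 1 s4ascs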

Installing S/4HANA

By using the Software Provisioning Manager (SWPM), install the instances in the following order:

  • ASCS instance

  • ERS instance

  • DB instance

  • PAS instance

  • AAS instances

Install S/4HANA on s4node1

The following file systems should be mounted on s4node1, where ASCS will be installed:

/usr/sap/S4H/ASCS20
/usr/sap/S4H/SYS
/usr/sap/trans
/sapmnt

The virtual IP for s4ascs should be enabled on s4node1 at this point.
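One quick, illustrative way to confirm that the required file systems are in place before starting the installer:

[root@s4node1]# df -h /usr/sap/S4H/ASCS20 /usr/sap/S4H/SYS /usr/sap/trans /sapmnt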

Run the Installer

[root@s4node1]# ./sapinst SAPINST_USE_HOSTNAME=s4ascs

Select the High-Availability System Option

Figure 4.9: SWPM installation screen

SAP HANA

In this example, SAP HANA uses the following parameters.

SAP HANA SID:                    S4D
SAP HANA Instance number:        00

SAP HANA is installed on a separate cluster in this course, which is covered in earlier chapters.

Because the application server is installed on a different node, the SAP HANA client must be installed on the application server hosts to enable communication between the application server instances and the SAP HANA database.

[root]# ./sapinst SAPINST_USE_HOSTNAME=s4db

Installing Application Servers

The following file systems must also be mounted on the respective host to run the application server instance. If you have multiple application servers, then install each one on the corresponding host:

/usr/sap/S4H/D<Ins#>
/usr/sap/S4H/SYS
/usr/sap/trans
/sapmnt

Run the Installer

[root]# ./sapinst

Select the High-Availability System option.

Follow the same procedure as described in the SWPM guides; it is similar to the way that the ASCS/ERS instances are installed.
