
Orientation to the Classroom Environment

Classroom Environment

Figure 0.1: Classroom environment

In this course, the main computer system that is used for hands-on learning activities is the workstation machine. The workstation virtual machine (VM) is the only one with a graphical desktop. Always log in directly to workstation first.

From workstation, use SSH for command-line access to all other VMs. The workstation machine has a standard user account, student, with student as the password. The student user account can become the root user if necessary. No exercise in this course requires you to log in directly as root, but if you must, then the password is redhat.
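For example, a typical session opens an SSH connection from workstation to a cluster node and then escalates to root. The following transcript is an illustration, not an exercise step; it assumes that sudo is configured for the student user, as is usual in Red Hat classrooms (otherwise, use su - with the redhat password):

```shell
# From workstation, connect to a lab machine as the student user.
[student@workstation ~]$ ssh student@nodea
# Become root for tasks that require administrative privileges.
[student@nodea ~]$ sudo -i
[root@nodea ~]#
```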

You perform all exercises on the hana1, hana2, nodea, nodeb, nodec, and noded machines. For these machines, the password for the student user is student, with the same privileges as on the workstation machine.

As shown in Figure 0.1: Classroom environment, all VMs share the lab.example.com DNS domain on the 172.25.250.0/24 network.

Alongside the lab.example.com network, three other networks are also in use: private.example.com (192.168.0.0/24) for private cluster communications, and san01.example.com (192.168.1.0/24) and san02.example.com (192.168.2.0/24) for iSCSI and NFS storage traffic.

In the classroom environment, only the private.example.com network is used for private cluster communications. The private cluster communication network is critical in the cluster infrastructure, because the whole cluster requires this network to work. For this reason, Red Hat recommends improving cluster resilience by using network redundancy in production environments. For more details, see the RH436 course, Red Hat High Availability Clustering.

The system named bastion must always be running. The bastion system acts as a router between the network that connects your lab machines and the classroom network. If bastion is down, then other lab machines might not function properly, or might even hang during boot.

Several systems in the classroom provide supporting services. The utility.example.com server acts as an NFS file server for the SAP software.

Note

The SAP software cannot be provided as part of the course due to licensing restrictions. The software must be downloaded by using an SAP S-User ID with download permission, which your company provides.

An iSCSI and NFS storage server, storage.lab.example.com, is also provided. Information about how to use these servers is provided in the instructions for the activities that use them.

The Classroom Machines table provides information for the different machines that are used in the classroom environment, including their role and IP addresses.

Table 1. Classroom Machines

Machine name | IP addresses | Role
workstation.lab.example.com | 172.25.250.9 | Graphical workstation for system administration
classroom.example.com | 172.25.252.254 | Router to link the classroom network to the internet
bastion.lab.example.com | 172.25.250.254, 172.25.252.1 | Router to link VMs to central servers
power | 172.25.250.100, 192.168.0.100 | Machine to simulate the fencing devices through BMC or chassis
nodea.lab.example.com | 172.25.250.10, 192.168.0.10, 192.168.1.10, 192.168.2.10, 172.25.250.81 | S/4HANA Application Server A
nodeb.lab.example.com | 172.25.250.11, 192.168.0.11, 192.168.1.11, 192.168.2.11, 172.25.250.82 | S/4HANA Application Server B
nodec.lab.example.com | 172.25.250.12, 192.168.0.12, 192.168.1.12, 192.168.2.12, 172.25.250.83 | S/4HANA Application Server C
noded.lab.example.com | 172.25.250.13, 192.168.0.13, 192.168.1.13, 192.168.2.13, 172.25.250.84 | S/4HANA Application Server D
storage.lab.example.com | 172.25.250.15, 192.168.1.15, 192.168.2.15 | iSCSI and NFS storage server
hana1.lab.example.com | 172.25.250.22, 192.168.0.22, 192.168.1.22, 192.168.2.22, 172.25.250.80 | HANA database server 1
hana2.lab.example.com | 172.25.250.23, 192.168.0.23, 192.168.1.23, 192.168.2.23 | HANA database server 2

Fencing Environment

Fencing is an essential part of a high availability cluster. It prevents an unresponsive node from accessing cluster resources. Fencing is explained in detail in the RH436 course, Red Hat High Availability Clustering. Two different fencing methods are used during this course:

Fencing through simulated BMC

The following machines are connected to the fence network.

Figure 0.2: Simulated Baseboard Management Controller (BMC) environment

Because virtual machines have no embedded BMC device for power management, the BMC function is simulated by using the power machine.

The simulated BMC mechanism performs monitoring and management tasks remotely on the cluster nodes.

The IP addresses of the BMC devices (192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104, 192.168.0.105, and 192.168.0.106 for the nodea, nodeb, nodec, noded, hana1, and hana2 machines, respectively) are hosted on the power machine.

The BMC IP addresses and node names are assigned when the classroom environment is built. The openstackbmc service runs on the power machine with one process for each power-managed cluster node. This service responds to the Intelligent Platform Management Interface (IPMI) requests on behalf of the corresponding node.

[root@power ~]# systemctl status openstackbmc
  openstackbmc.service - OpenStack BMC using fakeipmi
   Loaded: loaded (/etc/systemd/system/openstackbmc.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-05-18 03:01:24 EDT; 2h 21m ago
  Process: 912 ExecStart=/usr/local/bin/openstackbmc-wrap.bash (code=exited, status=0/SUCCESS)
    Tasks: 13 (limit: 5036)
   Memory: 171.8M
   CGroup: /system.slice/openstackbmc.service
           |-1285 python /usr/local/bin/openstackbmc.py --project-name=ole-fb6154cc-2848-48b8-b81b-ac78d341a3cb --vm-name=noded
           |-1288 python /usr/local/bin/openstackbmc.py --project-name=ole-fb6154cc-2848-48b8-b81b-ac78d341a3cb --vm-name=nodec
           |-1291 python /usr/local/bin/openstackbmc.py --project-name=ole-fb6154cc-2848-48b8-b81b-ac78d341a3cb --vm-name=nodeb
           |-1294 python /usr/local/bin/openstackbmc.py --project-name=ole-fb6154cc-2848-48b8-b81b-ac78d341a3cb --vm-name=nodea

...output omitted...

For all the BMC devices, the login is admin and the password is password.
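For illustration, you can talk to a simulated BMC directly with a standard IPMI client such as ipmitool. The following invocation is an assumption for demonstration purposes, not a course exercise step; it presumes that ipmitool is installed on the node and uses the BMC address for nodea with the credentials above:

```shell
# Query the power state of nodea through its simulated BMC
# (192.168.0.101), using the admin/password credentials.
[root@nodeb ~]# ipmitool -I lanplus -H 192.168.0.101 -U admin -P password power status
Chassis Power is on
```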

Fencing through simulated chassis

The fence network here is similar to the previous one.

Figure 0.3: Simulated chassis

The second fencing method that is used in this course simulates a management chassis, such as ibmblade, hpblade, or bladecenter.

With this method, only the chassis IP address is needed to request fencing.

This fencing method uses the fence_ipmilan custom script and the pcmk_host_map parameter to assign a plug number to each cluster node. A fencing request that is sent to the chassis IP address (192.168.0.100) includes the plug number of the node to fence. The fence_ipmilan script converts the request to an IPMI call to the corresponding simulated BMC device from the previous method.

[root@nodeX ~]# cat /usr/sbin/fence_ipmilan
...output omitted...
ip_name_mapping = {
    "nodea": socket.gethostbyname("bmc-nodea"),
    "nodeb": socket.gethostbyname("bmc-nodeb"),
    "nodec": socket.gethostbyname("bmc-nodec"),
    "noded": socket.gethostbyname("bmc-noded"),
}
...output omitted...

In this classroom, the power machine simulates the chassis and performs the requested action, such as powering off the virtual machine that is associated with the node to fence.
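In Pacemaker, a shared chassis-style fence device of this kind is typically configured as a single stonith resource whose pcmk_host_map parameter maps each node to its plug number. The following command is a sketch under these assumptions; the resource name, plug numbers, and exact parameter set are illustrative, not the configuration that the course exercises prescribe:

```shell
# One shared fence device that targets the chassis IP address.
# pcmk_host_map assigns a plug number to each cluster node.
[root@nodea ~]# pcs stonith create fence_chassis fence_ipmilan \
    ip=192.168.0.100 username=admin password=password lanplus=1 \
    pcmk_host_map="nodea.lab.example.com:1;nodeb.lab.example.com:2;nodec.lab.example.com:3;noded.lab.example.com:4"
```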

Managing the High Availability Cluster in the Classroom

Procedures for managing high availability clusters in the classroom differ from those in production environments. Typically, high availability clusters require resilient, low-latency communication between all nodes. Individual nodes can be taken offline or rebooted for maintenance without loss of service. In production, the full environment is rarely, if ever, shut down completely.

Note

Always refer to current Red Hat High Availability Add-On documentation for the supported start and stop procedures for production environments. Classroom procedures that are discussed here might include shortcuts or exclude recommended procedures that are acceptable only for this custom classroom environment.

For more information about performing cluster maintenance, see Chapter 29. Performing Cluster Maintenance in the Configuring and Managing High Availability Clusters guide at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_and_managing_high_availability_clusters/index#assembly_cluster-maintenance-configuring-and-managing-high-availability-clusters

You can also refer to Knowledgebase: Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster at https://access.redhat.com/articles/2059253

The major difference in the classroom environment is that it uses virtual machines that are deployed either online or on a single physical system, whereas production environments typically use bare metal systems. Most training locations, both online and physical, automatically shut down student systems after timed use or after class hours.

Although automated shutdowns are not graceful, current high availability clusters are resilient enough for classrooms to restart and operate without difficulty.

SAP HANA and SAP S/4HANA Installation in the Classroom

Typically, SAP HANA and SAP S/4HANA require much larger systems and dedicated storage in production. SAP HANA has demanding disk performance requirements, so dedicated striped data volumes are required in production environments.

This course uses one virtualized disk and reduced server memory to minimize resource consumption in the environment and to increase the number of students who can take this training in parallel. Hence, the course uses SAP S/4HANA Foundation, which installs only certain core components of SAP S/4HANA, and not the full SAP S/4HANA business software, on the SAP HANA database.

The main difference from a regular installation is the much smaller database, which loads faster. From an OS and installation procedure viewpoint, there is no difference.

Shutting down the Classroom Environment

A high availability cluster infrastructure is designed for long-term operation.

However, you might need to reboot nodes in the cluster. If you need to reboot all the nodes, then it is advisable to reboot each node individually one after another, because rebooting all nodes at the same time can cause service downtime.
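For example, to reboot the nodes one at a time, stop the cluster services on one node, reboot it, and rejoin it to the cluster before moving on to the next node. The host name below is an example:

```shell
# Stop the cluster services on the local node only, then reboot it.
[root@nodea ~]# pcs cluster stop
[root@nodea ~]# reboot
# After the node boots, start the cluster services on it again.
[root@nodea ~]# pcs cluster start
```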

Recommended classroom practice is to gracefully stop your high availability cluster when you are finished with it for an extended time. To stop the cluster services, execute the following command:

[root@node ~]# pcs cluster stop --all

If you forget to shut down manually, then your course environment might time out and be shut down automatically. If you encounter major issues with restarting your environment, then discard and reprovision your student environment by using the correct procedure for the delivery environment.

Resetting Your Classroom Environment

Resetting your classroom environment is the procedure to revert some or all of your classroom nodes to their original state when the course was first created. By resetting, you can clean your virtual machines and start the exercises again. This way, you can also clear any classroom issue that is blocking your progress and that you cannot otherwise resolve.

This classroom has specific constraints for resetting part or all of your environment.

In most Red Hat training courses, individual systems can be reset separately as needed. However, in this course, resetting only a single cluster node results in that node losing necessary cluster configuration and failing to communicate with the other cluster nodes.

Thus, the correct procedure is to first remove the node from your cluster and then reset it. Removing a cluster node from an existing cluster is a two-step process.

First, remove the node from the cluster:

[root@oldnode ~]# pcs cluster node remove newnode.example.com
...output omitted...

Then, adjust the fencing configuration, either by removing the dedicated fence device, or by reconfiguring the shared fence device to reflect the removal of one node:

[root@oldnode ~]# pcs stonith delete fence_deletednode
...output omitted...

If you need to reset any node in the course, then you must run all the lab exercises again on that node before continuing.

The commands for resetting individual nodes are discussed in the upcoming "Controlling Your Systems" section.

The Nodes of the High Availability Cluster

Some classroom VMs are not modified during exercises, and never need to be reset unless you are solving a technical problem. For example, the workstation machine needs to be reset only if it becomes unstable or unreachable, and it can be reset by itself.

This table lists the machines that are never intended to be reset, and those machines that can be reset if necessary:

Table 2. Which machines are normally reset or not reset?

Typically do not need to be reset:

  • bastion

  • classroom

  • power

If required, can be reset:

  • nodea

  • nodeb

  • nodec

  • noded

  • hana1

  • hana2

  • storage


In the online environment, you can reset the selected machine by clicking ACTION → Reset.

Note

If you reset the power machine, then the fencing resources might fail and stop when fencing operations time out. In that case, you must reset the fail count for the resource to enable its use in the cluster again. For this purpose, execute the following command:

[root@node ~]# pcs resource cleanup my_resource

You can also reset the classroom environment by re-creating the original course build. Re-creating the course is quick, typically taking only a few minutes, and results in a clean, working environment.

Warning

For the sake of time, re-creating the course is useful to avoid troubleshooting in a classroom environment. However, in a production environment, you probably cannot rebuild your cluster from scratch. Thus, you must troubleshoot your cluster to make it operational again.

In the online environment, click the DELETE button. Wait for the deletion to finish, and then click the CREATE button.

Controlling Your Systems

You are assigned remote computers in a Red Hat Online Learning classroom. They are accessed through a web application hosted at . You should log in to this site using your Red Hat Customer Portal user credentials.

Controlling the Virtual Machines

The virtual machines in your classroom environment are controlled through a web page. The state of each virtual machine in the classroom is displayed on the page under the Online Lab tab.

Table 3. Machine States

Virtual Machine State Description
STARTING The virtual machine is in the process of booting.
STARTED The virtual machine is running and available (or, when booting, soon will be).
STOPPING The virtual machine is in the process of shutting down.
STOPPED The virtual machine is completely shut down. Upon starting, the virtual machine boots into the same state as when it was shut down (the disk will have been preserved).
PUBLISHING The initial creation of the virtual machine is being performed.
WAITING_TO_START The virtual machine is waiting for other virtual machines to start.

Depending on the state of a machine, a selection of the following actions is available.

Table 4. Classroom/Machine Actions

Button or Action Description
PROVISION LAB Create the ROL classroom. Creates all of the virtual machines needed for the classroom and starts them. Can take several minutes to complete.
DELETE LAB Delete the ROL classroom. Destroys all virtual machines in the classroom. Caution: Any work generated on the disks is lost.
START LAB Start all virtual machines in the classroom.
SHUTDOWN LAB Stop all virtual machines in the classroom.
OPEN CONSOLE Open a new tab in the browser and connect to the console of the virtual machine. You can log in directly to the virtual machine and run commands. In most cases, you should log in to the workstation virtual machine and use ssh to connect to the other virtual machines.
ACTION → Start Start (power on) the virtual machine.
ACTION → Shutdown Gracefully shut down the virtual machine, preserving the contents of its disk.
ACTION → Power Off Forcefully shut down the virtual machine, preserving the contents of its disk. This is equivalent to removing the power from a physical machine.
ACTION → Reset Forcefully shut down the virtual machine and reset the disk to its initial state. Caution: Any work generated on the disk is lost.

At the start of an exercise, if instructed to reset a single virtual machine node, click ACTION → Reset for only the specific virtual machine.

At the start of an exercise, if instructed to reset all virtual machines, click ACTION → Reset for every virtual machine.

If you want to return the classroom environment to its original state at the start of the course, you can click DELETE LAB to remove the entire classroom environment. After the lab has been deleted, you can click PROVISION LAB to provision a new set of classroom systems.

Warning

The DELETE LAB operation cannot be undone. Any work you have completed in the classroom environment up to that point will be lost.

The Autostop Timer

The Red Hat Online Learning enrollment entitles you to a certain amount of computer time. To help conserve allotted computer time, the ROL classroom has an associated countdown timer, which shuts down the classroom environment when the timer expires.

To adjust the timer, click MODIFY to display the New Autostop Time dialog box. Set the number of hours until the classroom should automatically stop. Note that there is a maximum time of ten hours. Click ADJUST TIME to apply this change to the timer settings.

Revision: rh445-8.4-4e0c572