Configure a cluster of JBoss EAP servers in a managed domain and deploy a cluster-aware application.
| Resources | |
|---|---|
| Files | |
| Application URL | http://workstation:9080/cluster |
Outcomes
You should be able to configure a cluster of Red Hat JBoss Enterprise Application Platform (JBoss EAP) server instances and deploy a cluster-aware application to test load balancing and failover.
Use the following command to prepare the environment:
[student@workstation ~]$ lab start cluster-review
A JBoss EAP administrator has set up a managed domain with two host controllers running on servera and serverb machines.
The domain controller runs on the workstation machine.
The domain and host configuration files are stored in the /opt/domain directory on all three machines.
In this exercise you start the managed domain and configure a two-node cluster. You run another JBoss EAP standalone instance on the workstation machine to act as the load balancer. You can use either the management console or the management CLI to achieve your objectives, keeping in mind that the management CLI is the preferred option in production environments.
Instructions
Verify that the needed ports in the firewall are open by running the following command on the three machines:
[student@workstation ~]$ sudo firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client pulseaudio ssh
  ports: 8888/tcp 9990/tcp 3306/tcp 8080/tcp 8180/tcp 8443/tcp 23364/udp 45688/udp 45700/udp 55200/udp 54200/tcp 8009/tcp 7600/tcp 57600/tcp 9080/tcp
...output omitted...
[student@workstation ~]$ ssh root@servera firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 9990/tcp 3306/tcp 8080/tcp 8180/tcp 8443/tcp 23364/udp 45688/udp 45700/udp 55200/udp 54200/tcp 8009/tcp 7600/tcp 57600/tcp 9080/tcp
...output omitted...
[student@workstation ~]$ ssh root@serverb firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 9990/tcp 3306/tcp 8080/tcp 8180/tcp 8443/tcp 23364/udp 45688/udp 45700/udp 55200/udp 54200/tcp 8009/tcp 7600/tcp 57600/tcp 9080/tcp
...output omitted...
Start the domain controller on the workstation machine.
Use /opt/domain as the jboss.domain.base.dir value.
The host file for the domain controller is called host-master.xml.
Note that the jboss user owns the /opt/domain directory.
Therefore, you must start the domain controller by using the jboss user.
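As a sketch only, assuming the /opt/jboss-eap-7.4 installation path that the host controllers use later in this exercise, a command similar to the following starts the domain controller as the jboss user (the -bmanagement option is only needed if host-master.xml does not already bind the management interface to the public address):
[student@workstation ~]$ sudo -u jboss /opt/jboss-eap-7.4/bin/domain.sh \
-Djboss.domain.base.dir=/opt/domain/ \
--host-config=host-master.xml \
-bmanagement=172.25.250.9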
Start the load-balancer instance on the workstation machine.
Use /opt/standalone as the jboss.server.base.dir property value.
Use the standalone-load-balancer.xml configuration file, and apply a port offset of 1000.
This load balancer is going to be the entry point for the application.
Therefore, it must listen on the workstation public IP address, 172.25.250.9.
Note that the jboss user owns the /opt/standalone directory.
Therefore, you must start the load balancer by using the jboss user.
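As a sketch only, assuming the same /opt/jboss-eap-7.4 installation path, a command similar to the following starts the load balancer as the jboss user with the required configuration file, port offset, and bind addresses:
[student@workstation ~]$ sudo -u jboss /opt/jboss-eap-7.4/bin/standalone.sh \
-Djboss.server.base.dir=/opt/standalone/ \
--server-config=standalone-load-balancer.xml \
-Djboss.socket.binding.port-offset=1000 \
-b 172.25.250.9 -bmanagement=172.25.250.9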
Configure a cluster of two JBoss EAP server instances with the following specifications:
Configure TCP-based clustering for communication between the JBoss EAP server instances by modifying and running the provided /tmp/new-tcp-stack.cli file.
Use the mod_cluster-based dynamic load balancer.
The load balancer must load balance requests among the servers in Group1.
Deploy the cluster.war application to the Group1 server group.
Disable the front-end load balancer's default UDP-based auto-discovery on the back-end JBoss EAP server instances.
Configure a static list of proxies for the JBoss EAP server instances.
Configure a TCP-based cluster of JBoss EAP servers in the managed domain
Use the management CLI to define a new TCP stack configuration.
Open the /tmp/new-tcp-stack.cli file, and review the commands.
Edit this file as the jboss user.
Replace the initial_hosts property values with the host names and ports of the two JBoss EAP server instances.
Execute the management CLI script file on the domain controller.
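The contents of the provided script are not reproduced here. As an illustration only, the edited initial_hosts value typically lists both back-end hosts on the default jgroups-tcp port, for example initial_hosts="servera[7600],serverb[7600]" (the host names and port are assumptions based on this lab environment). One way to execute the script against the domain controller is with the --file option of the management CLI:
[student@workstation ~]$ /opt/jboss-eap-7.4/bin/jboss-cli.sh --connect \
--controller=172.25.250.9 --file=/tmp/new-tcp-stack.cli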
Launch the management CLI and connect to the domain controller to configure the servers in the managed domain.
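For reference, assuming the standard installation path, one way to connect is:
[student@workstation ~]$ /opt/jboss-eap-7.4/bin/jboss-cli.sh --connect --controller=172.25.250.9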
Configure the modcluster subsystem.
By default, JBoss EAP advertises its status to load balancers by using UDP multicasting.
Disable advertising in the modcluster subsystem for the full-ha profile.
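A single write-attribute operation, using the same mod-cluster-config path as the proxy configuration shown later in this step, is one way to do this; treat it as a sketch:
[domain@172.25.250.9:9990 /] /profile=full-ha/subsystem=modcluster\
/mod-cluster-config=configuration:write-attribute(name=advertise,value=false)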
Configure the back-end JBoss EAP nodes with a list of proxies or load balancers.
Configure the back-end nodes to communicate with the load balancer that is running on the workstation machine.
Make sure that you add an outbound socket binding that points to the load balancer IP address and port (172.25.250.9:9080):
[domain@172.25.250.9:9990 /] /socket-binding-group=full-ha-sockets\
/remote-destination-outbound-socket-binding=lb:\
add(host=172.25.250.9,port=9080)
{
    "outcome" => "success",
    "result" => undefined,
    "server-groups" => undefined
}
Then, add the proxies to the mod_cluster configuration:
[domain@172.25.250.9:9990 /] /profile=full-ha/subsystem=modcluster\
/mod-cluster-config=configuration:list-add(name=proxies,value=lb)
{
    "outcome" => "success",
    "result" => undefined,
    "server-groups" => undefined
}
Reload the domain controller by using the management CLI.
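For example, assuming the domain controller host defined in host-master.xml is named master, the following command reloads it:
[domain@172.25.250.9:9990 /] reload --host=master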
Configure the load balancer.
Launch the management CLI and connect to the load balancer instance on the workstation machine.
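Because of the 1000 port offset, the load balancer management interface listens on port 10990. A connection command similar to the following is one possibility:
[student@workstation ~]$ /opt/jboss-eap-7.4/bin/jboss-cli.sh --connect --controller=172.25.250.9:10990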
Configure the modcluster subsystem to act as a front-end load balancer by setting the mod_cluster filter.
Use the modcluster socket binding for the advertise socket binding, and use the http socket binding for the management socket binding.
As a final step, bind the modcluster filter to the undertow default-server.
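As a sketch, assuming a filter named modcluster and the default-host virtual host, the configuration consists of adding the mod-cluster filter and then referencing it from the default server:
[standalone@172.25.250.9:10990 /] /subsystem=undertow/configuration=filter\
/mod-cluster=modcluster:add(management-socket-binding=http,advertise-socket-binding=modcluster)
[standalone@172.25.250.9:10990 /] /subsystem=undertow/server=default-server\
/host=default-host/filter-ref=modcluster:add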
Reload the load balancer configuration.
Start the host controllers on the servera and the serverb machines.
Use /opt/domain as the jboss.domain.base.dir value.
The host file for the host controllers is host-slave.xml.
Note that the jboss user owns the /opt/domain directory.
Therefore, you must start the host controller by using the jboss user.
You can safely ignore the following output:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.wildfly.extension.elytron.SSLDefinitions (jar:file:/opt/jboss-eap-7.4/modules/system/layers/base/.overlays/layer-base-jboss-eap-7.4.11.CP/org/wildfly/extension/elytron/main/wildfly-elytron-integration-15.0.26.Final-redhat-00001.jar!/) to method com.sun.net.ssl.internal.ssl.Provider.isFIPS()
WARNING: Please consider reporting this to the maintainers of org.wildfly.extension.elytron.SSLDefinitions
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
For more information, refer to https://access.redhat.com/solutions/4996491.
Start the host controller on the servera machine.
Open a new terminal window on the workstation machine and run the following command:
[student@workstation ~]$ ssh -t jboss@servera /opt/jboss-eap-7.4/bin/domain.sh \
-Djboss.domain.base.dir=/opt/domain/ \
-Djboss.domain.master.address=172.25.250.9 \
--host-config=host-slave.xml
Note the use of the -t option for the ssh command to allocate a pseudo-terminal.
You must use this option to properly propagate the signals that Ctrl+C sends to the remote machine.
Start the host controller on the serverb machine.
Open a new terminal window on the workstation machine and run the following command:
[student@workstation ~]$ ssh -t jboss@serverb /opt/jboss-eap-7.4/bin/domain.sh \
-Djboss.domain.base.dir=/opt/domain/ \
-Djboss.domain.master.address=172.25.250.9 \
--host-config=host-slave.xml
Note the use of the -t option for the ssh command to allocate a pseudo-terminal.
You must use this option to properly propagate the signals that Ctrl+C sends to the remote machine.
Verify that both host controllers connect to the domain controller and form a managed domain.
Inspect the console window where you started the domain controller and verify that both servera and serverb are registered as secondary controllers.
[Host Controller] 03:28:53,434 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) (Host Controller) started in 3622ms - Started 80 of 81 services (22 services are lazy, passive or on-demand)
[Host Controller] 03:29:12,302 INFO [org.jboss.as.domain.controller] (Host Controller Service Threads - 37) WFLYHC0019: Registered remote slave host "servera", JBoss JBoss EAP 7.4.11.GA (WildFly 15.0.26.Final-redhat-00001)
[Host Controller] 03:29:47,951 INFO [org.jboss.as.domain.controller] (Host Controller Service Threads - 37) WFLYHC0019: Registered remote slave host "serverb", JBoss JBoss EAP 7.4.11.GA (WildFly 15.0.26.Final-redhat-00001)
Inspect the load balancer terminal window, and verify that the load balancer registers the two server instances from Group1.
07:14:50,073 INFO [io.undertow] (default task-1) UT005053: Registering node servera:servera.1, connection: ajp://172.25.250.10:8009/?#
07:15:00,103 INFO [io.undertow] (default task-1) UT005045: Registering context /, for node servera:servera.1
07:15:00,107 INFO [io.undertow] (default task-1) UT005045: Registering context /wildfly-services, for node servera:servera.1
07:15:16,559 INFO [io.undertow] (default task-1) UT005053: Registering node serverb:serverb.1, connection: ajp://172.25.250.11:8009/?#
07:15:26,572 INFO [io.undertow] (default task-1) UT005045: Registering context /wildfly-services, for node serverb:serverb.1
07:15:26,673 INFO [io.undertow] (default task-1) UT005045: Registering context /, for node serverb:serverb.1
Deploy and test the cluster test application.
Using the management CLI, stop the servers in the server group Group2 because they are not used in this lab.
Deploy the /tmp/cluster.war test application to the Group1 server group.
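The following management CLI commands, run against the domain controller, are one way to perform these two steps (the blocking parameter simply waits for the servers to stop before returning):
[domain@172.25.250.9:9990 /] /server-group=Group2:stop-servers(blocking=true)
[domain@172.25.250.9:9990 /] deploy /tmp/cluster.war --server-groups=Group1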
Inspect the load balancer terminal window and verify that the load balancer registers the servera.1 and serverb.1 server instances, and they are ready to serve the /cluster context.
Navigate to the load balancer at http://172.25.250.9:9080/cluster.
You should see the cluster application.
Refresh the browser several times and notice that each request is served by the same server.
This is due to session stickiness.
Determine which server instance is handling your current request by looking at the label in the application.
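As an optional check from the command line (the exact page output depends on the cluster application), curl with a cookie jar preserves the session cookie that makes the requests sticky; without the cookie options, each request may start a new session:
[student@workstation ~]$ curl -L -c /tmp/cookies.txt -b /tmp/cookies.txt http://172.25.250.9:9080/cluster/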
Stop the host that is actively handling the requests by pressing Ctrl+C in the appropriate terminal window to shut down the entire host controller.
Observe the load balancer terminal window and verify that the server you stopped and its /cluster context have been unregistered from the load balancer.
...output omitted...
07:30:43,147 INFO [io.undertow] (default task-1) UT005047: Unregistering context /, from node servera:servera.1
07:30:43,179 INFO [io.undertow] (default task-1) UT005047: Unregistering context /wildfly-services, from node servera:servera.1
07:30:43,204 INFO [io.undertow] (default task-1) UT005047: Unregistering context /cluster, from node servera:servera.1
Return to the web browser and refresh the page. The load balancer fails over your request to the other remaining server without losing the current visits value.
Clean up.
Press Ctrl+C in each terminal window where you started a host controller or the domain controller to stop the managed domain.
Press Ctrl+C to shut down the load balancer instance in the terminal window where you started it.
Press Ctrl+C to exit the management CLI if you used the CLI in the lab.