Configure the JGroups UDP and TCP stack to enable clustering of web applications.
Outcomes
You should be able to configure and test a two-node cluster of Red Hat JBoss Enterprise Application Platform (JBoss EAP) instances by using the default UDP stack and a custom TCP stack.
Run the following command to prepare the environment.
[student@workstation ~]$ lab start cluster-subsystems
Instructions
The classroom environment is prepared to allow traffic in the JGroups ports. Verify that the ports in the firewall are opened by running the following command:
[student@workstation ~]$ sudo firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client pulseaudio ssh
  ports: 8888/tcp 9990/tcp 3306/tcp 23364/udp 45688/udp 45700/udp 55200/udp 54200/tcp 55300/udp 54300/tcp 7600/tcp 7700/tcp 57600/tcp 57700/tcp 8080/tcp 8180/tcp
...output omitted...
To learn how to configure the firewall rules in Red Hat Enterprise Linux 9, see the /home/student/AD248/labs/cluster-subsystems/jgroups-firewall-rules.sh file.
Start the standalone JBoss EAP servers.
In this guided exercise, you use two standalone instances running the standalone-full-ha.xml configuration.
The standalone-full-ha.xml configuration file defines a private interface in the <interfaces> section.
The private interface is used for internal cluster communication.
In a production environment, the public and the private interfaces are bound to two physically separated networks with different IP addresses.
In this lab, we are binding both interfaces to the IP of the workstation machine: 172.25.250.9.
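For orientation, the `<interfaces>` section of the standalone-full-ha.xml file resolves each address from a system property, which is why the `-Djboss.bind.address` and `-Djboss.bind.address.private` parameters used in this exercise take effect. The defaults below are as shipped in JBoss EAP 7.4; verify against your own copy of the file:

```xml
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
    <interface name="private">
        <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
    </interface>
</interfaces>
```

Because each `inet-address` falls back to 127.0.0.1 when the property is unset, passing the properties on the command line is enough; no edit to the XML file is required.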
Start the first standalone server with the following requirements:
Use the /home/student/AD248/labs/cluster-subsystems/jgroups-cluster1 directory as the base directory.
Use the standalone-full-ha.xml configuration file.
Set the public interface IP as 172.25.250.9.
Set the private interface IP as 172.25.250.9.
Set the node name to jgroups-cluster1.
Set a password for the messaging subsystem by using the -Djboss.messaging.cluster.password=mqpass parameter.
To start the server, run the following commands from the workstation machine:
[student@workstation ~]$ cd /opt/jboss-eap-7.4/bin
[student@workstation bin]$ ./standalone.sh \
--server-config=standalone-full-ha.xml \
-Djboss.server.base.dir=/home/student/AD248/labs/\
cluster-subsystems/jgroups-cluster1 \
-Djboss.bind.address=172.25.250.9 \
-Djboss.bind.address.private=172.25.250.9 \
-Djboss.node.name=jgroups-cluster1 \
-Djboss.messaging.cluster.password=mqpass
You can safely ignore the following output:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.wildfly.extension.elytron.SSLDefinitions (jar:file:/opt/jboss-eap-7.4/modules/system/layers/base/.overlays/layer-base-jboss-eap-7.4.11.CP/org/wildfly/extension/elytron/main/wildfly-elytron-integration-15.0.26.Final-redhat-00001.jar!/) to method com.sun.net.ssl.internal.ssl.Provider.isFIPS()
WARNING: Please consider reporting this to the maintainers of org.wildfly.extension.elytron.SSLDefinitions
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Verify that the first instance started without errors.
The standalone-full-ha.xml file does not define a console handler by default, so you must use the tail command to view the server logs.
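If you do prefer console output, a console handler could be added with the management CLI, along the following lines. This is a sketch and is not required for this exercise; the formatter pattern is an assumption modeled on the default log format:

```
/subsystem=logging/console-handler=CONSOLE:add(formatter="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n")
/subsystem=logging/root-logger=ROOT:add-handler(name=CONSOLE)
```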
Open a terminal in the workstation machine, and inspect the /home/student/AD248/labs/cluster-subsystems/jgroups-cluster1/log/server.log file by using the tail -f command.
[student@workstation ~]$ tail -f \
/home/student/AD248/labs/cluster-subsystems/jgroups-cluster1/log/server.log
2023-10-30 03:59:37,992 INFO [org.wildfly.extension.messaging-activemq] (ServerService Thread Pool -- 84) WFLYMSGAMQ0002: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory
2023-10-30 03:59:38,039 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-7) WFLYJCA0007: Registered connection factory java:/JmsXA
2023-10-30 03:59:38,107 INFO [org.apache.activemq.artemis.ra] (MSC service thread 1-7) AMQ151007: Resource adaptor started
2023-10-30 03:59:38,107 INFO [org.jboss.as.connector.services.resourceadapters.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-7) IJ020002: Deployed: file://RaActivatoractivemq-ra
2023-10-30 03:59:38,110 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-1) WFLYJCA0002: Bound Jakarta Connectors ConnectionFactory [java:/JmsXA]
2023-10-30 03:59:38,110 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-1) WFLYJCA0118: Binding connection factory named java:/JmsXA to alias java:jboss/DefaultJMSConnectionFactory
2023-10-30 03:59:38,279 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
2023-10-30 03:59:38,283 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.4.11.GA (WildFly Core 15.0.26.Final-redhat-00001) started in 9286ms - Started 441 of 712 services (468 services are lazy, passive or on-demand)
2023-10-30 03:59:38,285 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
2023-10-30 03:59:38,285 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
Leave the tail command running in the terminal.
Start the second standalone server with the following requirements:
Use the /home/student/AD248/labs/cluster-subsystems/jgroups-cluster2 directory as the base directory.
Use the standalone-full-ha.xml configuration file.
Set the public interface IP as 172.25.250.9.
Set the private interface IP as 172.25.250.9.
To avoid port conflicts, set a port-offset with a value of 100.
Set the node name to jgroups-cluster2.
Set a password for the messaging subsystem by using the -Djboss.messaging.cluster.password=mqpass parameter.
To start the server, run the following commands from the workstation machine:
[student@workstation ~]$ cd /opt/jboss-eap-7.4/bin
[student@workstation bin]$ ./standalone.sh \
--server-config=standalone-full-ha.xml \
-Djboss.server.base.dir=/home/student/AD248/labs/\
cluster-subsystems/jgroups-cluster2 \
-Djboss.bind.address=172.25.250.9 \
-Djboss.bind.address.private=172.25.250.9 \
-Djboss.socket.binding.port-offset=100 \
-Djboss.node.name=jgroups-cluster2 \
-Djboss.messaging.cluster.password=mqpass
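The port-offset value shifts every socket binding of the second instance by the same fixed amount, which is why the instance later answers on HTTP port 8180 and management port 10090. A quick sketch of the arithmetic, with base values assumed from the default socket-binding group:

```shell
# port-offset adds a fixed value to every port in the socket-binding group.
# Base ports (assumed defaults): http=8080, management-http=9990, jgroups-tcp=7600
offset=100
for binding in http:8080 management-http:9990 jgroups-tcp:7600; do
  name=${binding%%:*}     # binding name before the colon
  base=${binding##*:}     # base port after the colon
  echo "$name: $((base + offset))"
done
```

With an offset of 100, the effective ports become 8180, 10090, and 7700, which match the URLs and controller addresses used later in this exercise.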
Observe the log file of jgroups-cluster1 after the jgroups-cluster2 instance starts successfully.
The following entries show that there are now two members in the cluster:
...output omitted...
2023-10-30 04:07:03,304 INFO [org.infinispan.CLUSTER] (thread-6,ejb,jgroups-cluster1) ISPN000094: Received new cluster view for channel ejb: [jgroups-cluster1|1] (2) [jgroups-cluster1, jgroups-cluster2]
2023-10-30 04:07:03,305 INFO [org.infinispan.CLUSTER] (thread-6,ejb,jgroups-cluster1) ISPN100000: Node jgroups-cluster2 joined the cluster
2023-10-30 04:07:05,099 INFO [org.apache.activemq.artemis.core.server] (Thread-1 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@7f38312d)) AMQ221027: Bridge ClusterConnectionBridge@6888dbe0 [name=$.artemis.internal.sf.my-cluster.4f791c5f-76fb-11ee-90de-3ed52c6e4d38, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.4f791c5f-76fb-11ee-90de-3ed52c6e4d38, postOffice=PostOfficeImpl ...output omitted...
The ejb channel is clustered because the messaging subsystem forms a cluster.
The web channel is not active until you deploy a distributable web application.
Verify that the two JBoss EAP instances in the cluster are using the UDP stack to communicate.
Use the tcpdump command tool to monitor traffic on the workstation machine.
The communication happens on the 230.0.0.4 multicast address on the 45688 port by default.
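These defaults come from the jgroups-udp socket binding in the standalone-full-ha.xml file. The line below is shown as shipped in JBoss EAP 7.4 for reference; verify against your own copy:

```xml
<socket-binding name="jgroups-udp" interface="private" port="55200"
    multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
```

The binding is attached to the private interface, so the multicast traffic you capture next travels on the interface bound to 172.25.250.9.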
Open a new terminal window and run the following command:
[student@workstation ~]$ sudo tcpdump -i eth0 udp port 45688 -vvv
dropped privs to tcpdump
tcpdump: listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
04:13:56.518290 IP (tos 0x0, ttl 2, id 4632, offset 0, flags [DF], proto UDP (17), length 132)
    workstation.lab.example.com.55200 > 230.0.0.4.45688: [bad udp cksum 0x8ca9 -> 0xb7c6!] UDP, length 104
04:14:02.902116 IP (tos 0x0, ttl 2, id 7109, offset 0, flags [DF], proto UDP (17), length 132)
...output omitted...
Press Ctrl+C to stop the command.
Deploy the cluster.war application.
Use the /home/student/AD248/labs/cluster-subsystems/cluster.war application.
You must deploy the cluster application to the two standalone servers.
Open a new terminal window and run the management CLI. Connect to the first cluster instance by running the following commands:
[student@workstation ~]$ cd /opt/jboss-eap-7.4/bin
[student@workstation bin]$ ./jboss-cli.sh -c
[standalone@localhost:9990 /]
Deploy the cluster application:
[standalone@localhost:9990 /] deploy \
/home/student/AD248/labs/cluster-subsystems/cluster.war
Observe the log file of jgroups-cluster1 after deploying the application.
The web channel is now active:
...output omitted...
2023-10-30 04:16:56,409 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0027: Starting deployment of "cluster.war" (runtime-name: "cluster.war")
2023-10-30 04:16:56,814 INFO [org.infinispan.CONFIG] (MSC service thread 1-4) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
2023-10-30 04:16:56,815 INFO [org.infinispan.CONFIG] (MSC service thread 1-4) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
2023-10-30 04:16:57,029 INFO [org.infinispan.CONFIG] (MSC service thread 1-4) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
2023-10-30 04:16:57,030 INFO [org.infinispan.CONFIG] (MSC service thread 1-4) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
2023-10-30 04:16:57,165 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 85) WFLYCLINF0002: Started http-remoting-connector cache from ejb container
2023-10-30 04:16:57,167 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 91) WFLYCLINF0002: Started default-server cache from web container
2023-10-30 04:16:57,241 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 92) WFLYCLINF0002: Started cluster.war cache from web container
2023-10-30 04:16:57,431 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 85) WFLYUT0021: Registered web context: '/cluster' for server 'default-server'
2023-10-30 04:16:57,520 INFO [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0010: Deployed "cluster.war" (runtime-name : "cluster.war")
...output omitted...
Connect to the second cluster instance:
[standalone@localhost:9990 /] connect localhost:10090
[standalone@localhost:10090 /]
Deploy the cluster application:
[standalone@localhost:10090 /] deploy \
/home/student/AD248/labs/cluster-subsystems/cluster.war
Observe the jgroups-cluster1 logs in the terminal running tail after deploying the application to jgroups-cluster2.
The infinispan subsystem replicates data between the two cluster web members:
...output omitted...
2023-10-30 05:15:12,999 INFO [org.infinispan.CLUSTER] (thread-16,ejb,jgroups-cluster1) [Context=http-remoting-connector] ISPN100002: Starting rebalance with members [jgroups-cluster1, jgroups-cluster2], phase READ_OLD_WRITE_ALL, topology id 2
2023-10-30 05:15:12,999 INFO [org.infinispan.CLUSTER] (thread-17,ejb,jgroups-cluster1) [Context=default-server] ISPN100002: Starting rebalance with members [jgroups-cluster1, jgroups-cluster2], phase READ_OLD_WRITE_ALL, topology id 2
2023-10-30 05:15:13,144 INFO [org.infinispan.CLUSTER] (thread-18,ejb,jgroups-cluster1) [Context=cluster.war] ISPN100002: Starting rebalance with members [jgroups-cluster1, jgroups-cluster2], phase READ_OLD_WRITE_ALL, topology id 2
2023-10-30 05:15:13,169 INFO [org.infinispan.CLUSTER] (thread-16,ejb,jgroups-cluster1) [Context=http-remoting-connector] ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-10-30 05:15:13,173 INFO [org.infinispan.CLUSTER] (thread-19,ejb,jgroups-cluster1) [Context=default-server] ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-10-30 05:15:13,182 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p5-t3) [Context=default-server] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-10-30 05:15:13,187 INFO [org.infinispan.CLUSTER] (thread-16,ejb,jgroups-cluster1) [Context=http-remoting-connector] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-10-30 05:15:13,189 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p5-t3) [Context=default-server] ISPN100010: Finished rebalance with members [jgroups-cluster1, jgroups-cluster2], topology id 5
2023-10-30 05:15:13,191 INFO [org.infinispan.CLUSTER] (thread-19,ejb,jgroups-cluster1) [Context=http-remoting-connector] ISPN100010: Finished rebalance with members [jgroups-cluster1, jgroups-cluster2], topology id 5
2023-10-30 05:15:13,209 INFO [org.infinispan.CLUSTER] (thread-19,ejb,jgroups-cluster1) [Context=cluster.war] ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-10-30 05:15:13,214 INFO [org.infinispan.CLUSTER] (thread-19,ejb,jgroups-cluster1) [Context=cluster.war] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-10-30 05:15:13,219 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p5-t4) [Context=cluster.war] ISPN100010: Finished rebalance with members [jgroups-cluster1, jgroups-cluster2], topology id 5
...output omitted...
Test the cluster with the UDP stack.
By default, JBoss EAP is set up for clustering using the UDP stack. In this mode, JBoss EAP nodes can automatically join and leave the cluster without any manual configuration, and the nodes can discover each other by using multicast over UDP.
Navigate to http://172.25.250.9:8080/cluster to view the application running on the jgroups-cluster1 instance.
Refresh the page a few times and observe that the counter is incremented by one every time you refresh the page.
Navigate to http://172.25.250.9:8180/cluster to view the application running on the jgroups-cluster2 instance.
Observe that the counter value has not been reset, but incremented by one.
Refresh the page a few times and observe that the counter is incremented by one every time you refresh the page.
Shut down the jgroups-cluster2 instance by pressing Ctrl+C on the terminal window where you started the instance.
Navigate to http://172.25.250.9:8080/cluster to view the application running on the jgroups-cluster1 instance.
Observe that the counter value has not been reset, but reflects the latest value plus one, as observed before shutting down jgroups-cluster2.
Refresh the page a few times and observe that the counter is incremented by one every time you refresh the page. The JBoss EAP clustering component uses the JGroups stack to replicate the counter value across all nodes in the cluster.
Shut down the jgroups-cluster1 instance by pressing Ctrl+C on the terminal window where you started the instance.
Test the cluster with the TCP stack.
In many data center networks, UDP or multicast traffic is disabled, so you must use unicast over TCP for clustering JBoss EAP nodes. In contrast to the UDP stack, where JBoss EAP nodes join a cluster automatically, the TCP unicast stack requires you to configure the IP addresses of all the cluster nodes.
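The node list takes the JGroups format "host[port],host[port]". A small illustrative shell sketch of how that format decomposes, using the two instances of this exercise (port 7700 is 7600 plus the offset of 100):

```shell
# Parse a JGroups initial_hosts value of the form "host[port],host[port]".
# The value matches this exercise's two instances (illustration only).
hosts="172.25.250.9[7600],172.25.250.9[7700]"
IFS=',' read -r -a members <<< "$hosts"
for m in "${members[@]}"; do
  host=${m%%\[*}                  # text before the first "["
  port=${m#*\[}; port=${port%]}   # text between "[" and "]"
  echo "member: $host port: $port"
done
```

Each entry names one cluster member and the TCP port its JGroups transport listens on; TCPPING contacts these addresses directly instead of discovering members over multicast.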
Use the management CLI to define a new TCP stack configuration.
Open the file /home/student/AD248/labs/cluster-subsystems/new-tcp-stack.cli in a text editor and review the commands:
# Add a new TCP stack called "tcpping"
batch
/subsystem="jgroups"/stack="tcpping":add()
/subsystem="jgroups"/stack="tcpping":add-protocol(type="TCPPING")
/subsystem="jgroups"/stack="tcpping"/transport="TRANSPORT":add(socket-binding="jgroups-tcp",type="TCP")
run-batch
# Customize the protocol settings for tcpping
batch
/subsystem="jgroups"/stack="tcpping"/protocol="TCPPING"/property="initial_hosts":add(value="node1[port1],node2[port2]")
/subsystem="jgroups"/stack="tcpping"/protocol="TCPPING"/property="port_range":add(value="10")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="MERGE3")
/subsystem="jgroups"/stack="tcpping":add-protocol(socket-binding="jgroups-tcp-fd",type="FD_SOCK")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="FD")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="VERIFY_SUSPECT")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="BARRIER")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="pbcast.NAKACK")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="UNICAST2")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="pbcast.STABLE")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="pbcast.GMS")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="UFC")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="MFC")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="FRAG2")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="RSVP")
/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpping)
run-batch
:reload
The initial_hosts property is a comma-separated list of server instances, with their ports in square brackets.
The initial_hosts values are the cluster members.
Edit the initial_hosts property and replace the values with the IP addresses and ports of the two EAP instances as follows (recall that the jgroups-cluster2 instance is running with a port-offset value of 100):
"initial_hosts":add(value="172.25.250.9[7600],172.25.250.9[7700]")
Start both JBoss EAP instances again as outlined in previous steps.
Run the following command in a new terminal on the workstation machine to create the new TCP stack on the jgroups-cluster1 instance:
[student@workstation ~]$ cd /opt/jboss-eap-7.4/bin
[student@workstation bin]$ ./jboss-cli.sh --connect \
--controller=localhost:9990 \
--file=/home/student/AD248/labs/cluster-subsystems/new-tcp-stack.cli
The batch executed successfully
process-state: reload-required
The batch executed successfully
process-state: reload-required
{
    "outcome" => "success",
    "result" => undefined
}
Then, reload the server:
[student@workstation bin]$ ./jboss-cli.sh --connect \
--controller=localhost:9990 --command="reload"
[student@workstation bin]$
Create the new TCP stack on the jgroups-cluster2 instance by running the following command:
[student@workstation bin]$ ./jboss-cli.sh --connect \
--controller=localhost:10090 \
--file=/home/student/AD248/labs/cluster-subsystems/new-tcp-stack.cli
The batch executed successfully
process-state: reload-required
The batch executed successfully
process-state: reload-required
{
    "outcome" => "success",
    "result" => undefined
}
Then, reload the server:
[student@workstation bin]$ ./jboss-cli.sh --connect \
--controller=localhost:10090 --command="reload"
[student@workstation bin]$
Verify that a new TCP stack called tcpping is present in the /home/student/AD248/labs/cluster-subsystems/jgroups-clusterX/configuration/standalone-full-ha.xml files of both instances, where 'X' denotes the instance number:
...output omitted...
<stack name="tcpping">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="org.jgroups.protocols.TCPPING">
<property name="initial_hosts">172.25.250.9[7600],172.25.250.9[7700]</property>
<property name="port_range">10</property>
</protocol>
<protocol type="MERGE3"/>
...output omitted...
</stack>
Also verify that the default stack has been set to the new tcpping definition:
...output omitted...
<subsystem xmlns="urn:jboss:domain:jgroups:8.0">
<channels default="ee">
<channel name="ee" stack="tcpping" cluster="ejb"/>
</channels>
...output omitted...
Test the cluster again by navigating to the application URL of each server, and verify that the application behaves in the same manner as when you configured the instances with the default UDP stack.
Observe the tcpdump command output as outlined in a previous step, and verify that you do not see any UDP traffic on port 45688.
[student@workstation ~]$ sudo tcpdump -i lo udp port 45688 -vvv
dropped privs to tcpdump
tcpdump: listening on lo, link-type EN10MB (Ethernet), snapshot length 262144 bytes
Press Ctrl+C to stop the command.
Observe the TCP traffic on localhost port 7600 when both instances are running using the tcpdump command:
[student@workstation ~]$ sudo tcpdump -i lo tcp port 7600 -vvv
dropped privs to tcpdump
tcpdump: listening on lo, link-type EN10MB (Ethernet), snapshot length 262144 bytes
07:15:46.401282 IP (tos 0x0, ttl 64, id 53490, offset 0, flags [DF], proto TCP (6), length 125)
workstation.lab.example.com.7600 > workstation.lab.example.com.45789: Flags [P.], cksum 0x4cb6 (incorrect -> 0x120c), seq 296543478:296543551, ack 1635241034, win 4, options [nop,nop,TS val 3049970180 ecr 3049970080], length 73
07:15:46.401384 IP (tos 0x0, ttl 64, id 55879, offset 0, flags [DF], proto TCP (6), length 52)
workstation.lab.example.com.45789 > workstation.lab.example.com.7600: Flags [.], cksum 0x4c6d (incorrect -> 0x1598), seq 1, ack 73, win 4, options [nop,nop,TS val 3049970180 ecr 3049970180], length 0
07:15:46.501677 IP (tos 0x0, ttl 64, id 53491, offset 0, flags [DF], proto TCP (6), length 125)
workstation.lab.example.com.7600 > workstation.lab.example.com.45789: Flags [P.], cksum 0x4cb6 (incorrect -> 0x10fa), seq 73:146, ack 1, win 4, options [nop,nop,TS val 3049970281 ecr 3049970180], length 73
...output omitted...
Press Ctrl+C to stop the command.
Stop the two JBoss EAP instances by pressing Ctrl+C on the terminal window where you started them.
Exit the management CLI sessions by typing quit or by pressing Ctrl+C in the terminal windows.
Exit the tail and tcpdump command sessions by pressing Ctrl+C on the terminal window where you started the commands.