Configuring Infinispan and JGroups to replicate HTTP sessions is only half of the Red Hat JBoss Enterprise Application Platform (JBoss EAP) web clustering equation. The other half is setting up a network load balancer to direct HTTP requests to JBoss EAP server instances that are cluster members, as depicted in the following diagram:
Other JBoss EAP clustered services, such as JMS clients and remote EJBs, do not require network load balancers. The JMS and EJB clients can handle load balancing and failover. Their client software also handles discovering existing cluster members and learning about other cluster topology changes.
The HTTP protocol does not provide the required load balancing and failover features. The solution to this problem is a standard HTTP protocol feature: proxy support.
A web proxy is a web server that receives HTTP requests and forwards them to another web server. You can use a web proxy to provide security, caching, and other features. For most web clients the proxy hides the real web servers that provide application content.
Having a front-end web proxy between users and the Jakarta EE application server is a common pattern for the following reasons:
Only the front-end web server needs to be accessible from the Internet, and the application servers are not exposed to direct attacks.
The front-end web server can serve static HTML pages, images, CSS, JavaScript, and other files, offloading some work from the application server.
The front-end web server can do SSL termination and processing, so application servers have more CPU cycles available to handle application logic.
A front-end web server hides different kinds of application servers and presents them as a coherent set of URLs. For example, you can make a PHP application and a Jakarta EE application look like they are part of the same web site.
Session affinity, also known as sticky sessions, is a web proxy mechanism that sends all requests from the same user to the same back-end web server. If you do not have session affinity, then user requests can go to different back-end web servers.
Hardware network load balancers, which work on OSI model layers 3 and 4, can be used for Jakarta EE application server clusters. Some networking hardware embeds a web proxy and can be configured to work as a layer 7 load balancer.
Jakarta EE web engines use an HTTP cookie to store a unique session ID. Only load balancers that can handle HTTP cookies can balance requests and keep session affinity at the same time. This is true for most web application runtimes, not just Jakarta EE.
To understand the need for and the advantages of session affinity, consider the impact on stateful and stateless applications, whether they are clustered or non-clustered:
Stateless web applications should work correctly without session affinity because there is no user data to be preserved in an HTTP session object. Any back-end server can process any request and return the expected results.
Having a clustered web container makes no difference for stateless web applications because there is no user session data to be replicated. They are usually deployed as non-clustered web applications to a non-clustered back-end server farm under a load balancer for increased scalability.
Stateful web applications can work in a non-clustered environment only if the load balancer provides session affinity, because only one back-end web server has a specific user's session data. Having a load balancer over non-clustered web applications provides scalability advantages even though it does not provide transparent failover for the user: if a back-end server fails, all user sessions from the failed server are lost, but users can start again on a different server.
Stateful web applications in a clustered environment have scalability advantages and provide transparent failover for all users. They should also work correctly without session affinity because all back-end servers have access to all user session data, so any back-end server would be able to answer any request and provide the expected results.
Session affinity is an optional feature for many applications, but it has performance implications. Web applications usually perform faster and require fewer hardware resources with session affinity enabled because of data locality.
Even if an application works without session affinity, the application benefits from data locality because requests made by a user usually work with the same data. This data is cached by many layers, from the back-end web server to a possibly shared database instance. Having the same back-end web server performing the whole sequence maximizes the utilization of these caches.
Undertow is a full-featured web server, capable of replacing traditional web server software such as Apache Httpd and Microsoft IIS for most use cases. Undertow includes a web proxy component that can do load balancing.
Undertow is also a web server built specifically to support Jakarta EE specifications, and provides most features from Java web containers such as Apache Tomcat.
Undertow has two features related to JBoss EAP clustering:
The AJP protocol is a binary replacement for the text-based HTTP protocol. AJP uses long-lived persistent connections; HTTP connections are either single-request or short-lived. AJP was designed to lower the overhead caused by a front-end web server on users accessing a Java back-end server. A web browser uses HTTP to connect to the web proxy, and the web proxy uses AJP to connect to the back-end application servers.
The mod_cluster protocol allows a web proxy to dynamically discover back-end web servers and the applications they provide, allowing a true dynamic environment. It also employs an additional HTTP connection to send load metrics from each back-end application server to the web proxy so that it can make better load-balancing decisions.
Traditional web load balancer software requires static configuration to provide each back-end web server's connection details. The load balancer does not take back-end node failures into account, which increases response times and network overhead when routing traffic to nodes in a failed state. Also, adding more back-end servers requires reconfiguring and potentially restarting the load balancer. The following figure illustrates a traditional web proxy acting as a load balancer:
The preceding features enable the Jakarta EE application server to act as a back-end web server.
The mod_cluster module dynamically builds the back-end web server list. It also detects new cluster members, new deployed applications, or failed cluster members without needing manual configuration.
The mod_cluster protocol requires a client component, which must be implemented by the load balancer, and a server component, which is implemented by the JBoss EAP modcluster subsystem.
The following figure illustrates a mod_cluster enhanced web proxy acting as a load balancer:
In the previous figure, the web proxy + mod_cluster box can be either a JBoss EAP server instance with Undertow configured to enable mod_cluster, or a native web server with a mod_cluster plug-in.
Each JBoss EAP server with modcluster box is a back-end web server.
A mod_cluster enhanced web proxy, such as Undertow, sends advertisement messages to all the back-end web servers that listen on the multicast address and port. Back-end web servers reply by sending their connection parameters and the list of application context paths to the load balancer.
The architecture is fault-tolerant: there can be multiple web proxies with mod_cluster clients acting as load balancers. Back-end web servers receive advertisement messages from the web proxies and reply to them.
In networks where multicast traffic is not allowed, advertising is disabled on the mod_cluster client. Each back-end web server is then manually configured with a list of web proxies. All of the back-end servers send information about their status and applications to the servers in the web server list. Even without multicast, a mod_cluster load balancer requires no static configuration.
You can configure Undertow to act as either a static (without mod_cluster) or a dynamic (with mod_cluster) load balancer. In either case, it is usually configured as a dedicated JBoss EAP server instance where no applications are deployed. This dedicated server instance is not a cluster member. To prevent a single point of failure, you can use a server group consisting only of multiple dedicated load balancer JBoss EAP instances.
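For example, in a managed domain you might create such a dedicated server group with a command like the following sketch; the lb-group name is arbitrary, and the command assumes the load-balancer profile and the load-balancer-sockets socket binding group that ship with the default domain.xml:
/server-group=lb-group:add(profile=load-balancer, socket-binding-group=load-balancer-sockets)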
Configuring undertow as a dynamic load balancer involves the following high-level steps:
Add a mod_cluster filter to the default undertow server configuration.
The JBoss EAP installation provides the standalone-load-balancer.xml and the load-balancer profiles to use in instances that only act as load balancers.
The standalone-load-balancer.xml configuration file is used for the standalone mode, and the load-balancer profile is used for the domain mode.
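For example, assuming EAP_HOME is a placeholder for the installation directory, you can start a standalone instance that acts only as a load balancer by using the bundled configuration file:
EAP_HOME/bin/standalone.sh --server-config=standalone-load-balancer.xml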
These profiles only use three subsystems: the logging, the io, and the undertow subsystems.
The undertow subsystem in these profiles contains the mod_cluster filter.
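As an illustrative, read-only check, you can inspect the filter configuration that the load-balancer profile ships with:
/profile=load-balancer/subsystem=undertow/configuration=filter:read-resource(recursive=true)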
Configure the advertisement settings in both the undertow and the modcluster subsystems.
You have two options for this configuration: setting the multicast parameters in both subsystems, or disabling advertisement and multicast and providing a proxy list to the modcluster subsystem.
By default, only the load-balancer profile contains the mod_cluster filter configuration.
To use other profiles, such as ha or full-ha, you need to create and configure the mod_cluster filter to use the correct multicast parameters.
To add the mod_cluster filter and configure it to default JBoss EAP settings, use the following command:
/profile=ha/subsystem=undertow/configuration=filter/mod_cluster=lb:add(\
management-socket-binding=http, advertise-socket-binding=modcluster)
The two attributes required by a mod_cluster filter are the following:
management-socket-binding
Informs undertow where to receive connection information and load balance metrics from the back-end web servers.
It should point to the socket-binding where JBoss EAP receives HTTP requests, which is http by default.
advertise-socket-binding
Informs undertow where to send advertisement messages, that is, the multicast address and UDP port, by referring to a socket-binding name.
After creating the filter, you must enable it in the desired undertow virtual hosts:
/profile=ha/subsystem=undertow/server=default-server/host=default-host/filter-ref=lb:add
Notice that lb is the name assigned to the mod_cluster filter defined in the previous command.
The socket binding groups called ha-sockets and full-ha-sockets already define the modcluster socket binding, which uses the 224.0.1.105 multicast address and the 23364 port.
The undertow subsystem uses this socket binding to know where to send advertise messages.
The modcluster subsystem uses the same socket binding to know where to listen for advertise messages.
Red Hat recommends changing the multicast address to prevent undesired JBoss EAP instances from trying to become load balancers for the clustered application server instances.
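For example, the following command sketches how you might change the multicast address of the modcluster socket binding; 224.0.1.200 is only an example value, and the same address must be used by both the load balancer and the cluster members:
/socket-binding-group=ha-sockets/socket-binding=modcluster:\
write-attribute(name=multicast-address, value=224.0.1.200)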
Red Hat recommends configuring an advertise key shared by the mod_cluster client and server. You can use the following command to configure the advertise key on the cluster members:
/profile=ha/subsystem=modcluster/mod_cluster-config=configuration:\
write-attribute(name=advertise-security-key,value=secret)
Run the following command to configure the key on the load balancer server instance:
/profile=ha/subsystem=undertow/configuration=filter/mod_cluster=lb:\
write-attribute(name=security-key,value=secret)
Notice that the previous commands affect different JBoss EAP server instances: the first one, on the modcluster subsystem, affects JBoss EAP instances that are members of a cluster.
The second one, on the undertow subsystem, affects the JBoss EAP instance that acts as the load balancer.
You can configure an undertow dynamic load balancer to not use multicast by disabling advertise on the mod_cluster filter.
This is done by setting the advertise-frequency attribute to zero and the advertise-socket-binding attribute to null.
/profile=ha/subsystem=undertow/configuration=filter/mod_cluster=lb:\
write-attribute(name=advertise-frequency,value=0)
/profile=ha/subsystem=undertow/configuration=filter/mod_cluster=lb:\
write-attribute(name=advertise-socket-binding,value=null)
In the previous example commands, lb is the name assigned to the mod_cluster filter.
Advertising also has to be disabled in the cluster members' modcluster subsystem so that they do not listen for advertisement messages:
/profile=ha/subsystem=modcluster/mod_cluster-config=configuration:\
write-attribute(name=advertise,value=false)
You need to provide a proxy list to the cluster members to inform them where the load balancer is, and you need to create an outbound socket binding that points to the load balancer. For example, the following command configures a single load balancer instance:
/socket-binding-group=ha-sockets/remote-destination-outbound-socket-binding=lb:\
add(host=10.1.2.3, port=8080)
The port in the outbound socket binding is the HTTP port of the load balancer JBoss EAP server instance.
Then, you can use this outbound socket binding to configure the proxies list on the modcluster subsystem:
/profile=ha/subsystem=modcluster/mod_cluster-config=configuration:\
write-attribute(name=proxies,value=[lb])
In the previous example, lb is the name that was assigned to the outbound socket binding.
Configuring undertow as a static load balancer involves the following high-level steps:
Add outbound socket bindings pointing to each cluster member.
Add a reverse-proxy handler to the default undertow server.
Add each cluster member to the proxy handler.
You must configure each cluster member IP address and AJP port as an outbound socket binding. For example, assuming there are two cluster members:
/socket-binding-group=ha-sockets/remote-destination-outbound-socket-binding=\
cluster-member1/:add(host=10.1.2.3, port=8009)
/socket-binding-group=ha-sockets/remote-destination-outbound-socket-binding=\
cluster-member2/:add(host=10.1.2.13, port=8009)
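Each back-end cluster member must also expose an AJP listener on that port. If the back-end undertow configuration does not already define one, a command like the following sketch adds it, assuming the default-server name and the standard ajp socket binding:
/profile=ha/subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)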
The reverse-proxy handler is created in the undertow subsystem:
/profile=ha/subsystem=undertow/configuration=handler/reverse-proxy=lb:add
Each cluster member is added as a route to the reverse-proxy handler by using a host child:
/profile=ha/subsystem=undertow/configuration=handler/reverse-proxy=lb/host=member1:\
add(outbound-socket-binding=cluster-member1,scheme=ajp,instance-id=hosta:server1,\
path=/app)
/profile=ha/subsystem=undertow/configuration=handler/reverse-proxy=lb/host=member2:\
add(outbound-socket-binding=cluster-member2,scheme=ajp,instance-id=hostb:server2,\
path=/app)
In the previous commands, each route instance-id attribute has to match the jboss.server.node system property for the referred node.
This system property usually takes the form host_name:server_name.
The path attribute value has to match the clustered application context path.
Enable the reverse-proxy handler on the desired undertow virtual hosts:
/profile=ha/subsystem=undertow/server=default-server/host=default-host/\
location=\/app:add(handler=lb)
In the previous example command, it was necessary to escape the forward slash (/) in /app with a backslash (\), because the forward slash is part of the CLI syntax.
Notice that this example only configures the load balancer for a single clustered application with the /app context path.
To add more applications to the load balancer, you can add each one as a new host child with a different path attribute.
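For example, the following hypothetical commands add a second application with the /app2 context path for the first cluster member, reusing the outbound socket binding and instance-id values from the previous examples; repeat the host command for the other cluster members:
/profile=ha/subsystem=undertow/configuration=handler/reverse-proxy=lb/host=member1-app2:\
add(outbound-socket-binding=cluster-member1,scheme=ajp,instance-id=hosta:server1,\
path=/app2)
/profile=ha/subsystem=undertow/server=default-server/host=default-host/\
location=\/app2:add(handler=lb)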
Jakarta EE web containers usually employ the JSESSIONID HTTP cookie to store a unique session identifier (ID), as mandated by the Servlet API specification.
This cookie value is an index to retrieve user session data stored in memory.
Undertow uses the session ID cookie to provide an efficient session affinity implementation.
Undertow appends the jboss.server.node system property to the session ID.
The load balancer intercepts subsequent requests from that browser and sends them to the same back-end server.
The dynamic load balancer also uses an instance-id attribute in the same way a static load balancer does.
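For example, assuming a back-end server with the hosta:server1 instance-id, a session cookie with the appended route might look like the following; the session ID itself is a made-up value:
JSESSIONID=kA5pXm0q2Yw7Lr3T9bVZ.hosta:server1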
Mod_cluster creates load balancing configurations at runtime.
The cluster member modcluster subsystem provides all necessary information, including the jboss.server.node system property value.
Then, the mod_cluster filter creates a hidden reverse-proxy handler and configures its routes.
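On a running standalone load balancer, you can inspect the balancer, node, and context information that the filter builds at runtime with an illustrative read-only command such as the following; in a managed domain, run the equivalent read against the /host=*/server=* path of the load balancer server:
/subsystem=undertow/configuration=filter/mod_cluster=lb:\
read-resource(include-runtime=true, recursive=true)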
Earlier JBoss EAP releases had no undertow subsystem, and the embedded web server provided no proxy capabilities.
Red Hat supported a number of native web server plug-ins as load balancers, for many operating systems and web servers.
Some of them are still supported by Red Hat through the Red Hat JBoss Core Services product, which is included as part of the JBoss EAP subscription.
JBoss Core Services provides the following load balancer modules, based on Apache Httpd native modules, for the Linux, Windows, and Solaris operating systems:
Mod_jk was the first module which implemented a static load balancer based on the AJP protocol. It was developed by the Apache Tomcat community. Mod_jk works with Microsoft IIS, but this particular configuration is not supported anymore as part of the EAP subscription.
These modules provide the Apache Httpd native proxy support, and they can work as a static load balancer by using either HTTP or AJP.
This is the mod_cluster client for Apache Httpd. Mod_cluster provides a dynamic load balancer for Apache Httpd by using either HTTP or AJP.
This module provides load balancing between Microsoft IIS web server proxies and JBoss EAP back-end servers.
Configuration details for the load balancers follow the same concepts already presented for Undertow. For more information, see the references section.
For more information about mod_cluster attributes, see the ModCluster Subsystem Attributes section in the Configuration Guide in the Red Hat JBoss EAP documentation at https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/configuration_guide/index#mod_cluster-reference
For more information about using JBoss EAP as a load balancer, see the Configuring JBoss EAP as a Front-end Load Balancer section in the Configuration Guide in the Red Hat JBoss EAP documentation at https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/configuration_guide/index#configuring_jboss_eap_load_balancer
Undertow Project Community Documentation
For more information about native connectors for external web servers, see the Red Hat JBoss Core Services documentation at https://access.redhat.com/documentation/en/red-hat-jboss-core-services