Bug 32317 - Making mod_jk replication aware (Clustering Support)
Product: Tomcat Connectors
Classification: Unclassified
Component: Common
Hardware: All
OS: All
Importance: P2 enhancement
Target Milestone: ---
Assigned To: Tomcat Developers Mailing List
Reported: 2004-11-19 13:53 UTC by Rainer Jung
Modified: 2008-10-05 03:13 UTC

Attachments:
Add hierarchical routing to load balancer (17.79 KB, patch), 2004-12-05 22:27 UTC, Rainer Jung
See former patch - now produced with diff -u (16.30 KB, patch), 2004-12-06 20:41 UTC, Rainer Jung

Description Rainer Jung 2004-11-19 13:53:50 UTC
Making mod_jk replication aware (Clustering Support)

At the moment mod_jk makes its load-balancing decisions among workers according
to the following parameters:

- session stickiness (choose the "only" right worker)
- locality (one worker is preferred)
- weight-based

What is missing, is any information about secondary cluster nodes.

Use Case 1: Horizontal scalability and performance of tomcat cluster

A Tomcat cluster currently only allows session replication to all nodes in the
cluster. Once you work with more than 3-4 nodes there is too much overhead and
risk in replicating sessions to all nodes, so we split the nodes into clustered
groups. I introduced a new worker attribute "domain" to let
get_most_suitable_worker in mod_jk know to which other nodes a session gets
replicated (all workers with the same value in the domain attribute). A load
balancing worker thus knows on which nodes the session is alive. If a node
fails or is taken down administratively, mod_jk chooses another node that holds
a replica of the session.
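
A minimal workers.properties sketch of such a grouping (host and worker names
here are hypothetical; "domain" is the new attribute proposed in this report,
while the other directives follow the usual mod_jk 1.2.x conventions):

```
# hypothetical workers.properties fragment: two replication groups
worker.list=lb

worker.node1.type=ajp13
worker.node1.host=host1
worker.node1.port=8009
worker.node1.domain=group1

worker.node2.type=ajp13
worker.node2.host=host2
worker.node2.port=8009
worker.node2.domain=group1

worker.node3.type=ajp13
worker.node3.host=host3
worker.node3.port=8009
worker.node3.domain=group2

worker.node4.type=ajp13
worker.node4.host=host4
worker.node4.port=8009
worker.node4.domain=group2

worker.lb.type=lb
worker.lb.balanced_workers=node1,node2,node3,node4
```

Sessions created on node1 are replicated only within group1, so on failure of
node1 the balancer falls back to node2 rather than to a node in group2.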

Use Case 2: Prevent thread count explosion

Once mod_jk connects an Apache process to a Tomcat instance, the Tomcat jk
connector needs one thread for this connection for as long as the Apache
process is alive, independent of the number of requests Apache actually sends
to Tomcat.

Now assume you have multiple Apaches and Tomcats. The Tomcats are clustered and
mod_jk uses sticky sessions. If you shut down one Tomcat for maintenance, all
Apaches will open connections to the remaining Tomcats. You end up with every
Tomcat getting connections from every Apache process, so the number of threads
needed inside the Tomcats explodes.

If you group the Tomcats into domains as in use case 1, the connections will
normally stay inside the domain and you will need far fewer threads.


The implementation uses an additional worker attribute "domain": if a session
has a jvmRoute, stickiness is enabled, and the correct worker is in error
state, another worker with the same domain attribute is chosen.
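
For this to work, each Tomcat's jvmRoute in server.xml has to match its mod_jk
worker name, so the route can be parsed out of the session id; e.g. (with a
hypothetical worker name):

```
<!-- server.xml on the Tomcat served by worker "node1" -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
  ...
</Engine>
```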

I have an implementation based on mod_jk 1.2.6 running successfully in
production, but I would have to adapt it to the 1.2.7 code changes. The
implementation only concerns common/jk_util.c, for the additional worker
configuration attribute "domain", and common/jk_lb_worker.c, for using it:
   - with stickiness, in case the primary worker is down, to first look for
some other worker with the same domain attribute;
   - without stickiness, to decide to which workers an existing session is
allowed to be balanced (all workers with the same domain as the worker given
in the session id).

I could provide the code, if you are interested.

Here is a more concrete example:

Enterprise application with redundant internet connections A and B.
A consists of two Apache instances A.a1 and A.a2, B of B.a1 and B.a2.
Behind them are 4 Tomcats: A.t1, A.t2, B.t1, B.t2.

A.t1 and A.t2 are clustered, B.t1 and B.t2 are clustered. mod_jk uses load
balancing with sticky sessions.

All Apaches can connect to any Tomcat, but A.t1 is local for A.a1, A.t2
for A.a2, B.t1 for B.a1 and B.t2 for B.a2:

A.a1   A.a2   B.a1   B.a2
 ||  X  ||  X  ||  X  ||
A.t1---A.t2   B.t1---B.t2

A.t1 and A.t2 are put into the same "domain" "A",
B.t1 and B.t2 are put into the same "domain" "B".
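
In workers.properties terms this grouping would be (worker names chosen here to
match the Tomcat instances above):

```
# hypothetical workers.properties fragment
worker.At1.domain=A
worker.At2.domain=A
worker.Bt1.domain=B
worker.Bt2.domain=B
```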

Now if you shut down e.g. Tomcat B.t1 for service/update (or if it breaks),
all Apaches will know from the domain configuration that sticky requests
for B.t1 have to go to B.t2. This is important, since the replicated
sessions from B.t1 will exist only on that Tomcat.

Without domains you have to put all the Tomcats in one cluster. But then
all sessions are replicated to all Tomcats. We have a production site
using 3x3=9 Tomcats, and a cluster with 9 nodes would mean too much
replication overhead.
Comment 1 Peter Rossbach 2004-11-21 09:53:35 UTC
Very fine idea, please send the patches!

Comment 2 Rainer Jung 2004-12-05 22:27:28 UTC
Created attachment 13650 [details]
Add hierarchical routing to load balancer

The attached patch concerns jk_lb_worker.c and jk_util.c.
It introduces the following features:

- additional property "domain" for workers
- Make routing decision hierarchical:
a) sticky worker if not in error
b) else worker from same domain
c) else local worker
d) else local domain (worker inside domain of one of the local workers)
e) any other worker

- erroneous non-sticky workers get chosen after the recovery time only if
their lb_value is maximal. This avoids continuously decrementing lb_value for
erroneous workers.

The patch was transformed from a patch for version 1.2.6. I have not yet had
time to compile and test it, but I know that at least Peter is waiting for it,
so I submit it untested.

Its predecessor for 1.2.6 has been tested and in production for several weeks.
Comment 3 Mladen Turk 2004-12-06 09:35:58 UTC

Could you please run a unified diff ('diff -u')?
I cannot use the patch otherwise.

Comment 4 Rainer Jung 2004-12-06 20:41:16 UTC
Created attachment 13658 [details]
See former patch - now produced with diff -u

Now in diff -u format. I had to switch from Solaris diff to GNU diff.
Comment 5 Mladen Turk 2004-12-07 13:26:02 UTC
Committed with small cosmetic modifications.
Thanks Rainer!

It would be great if you could write some docs or configuration examples.
Comment 6 Rainer Jung 2004-12-07 18:19:15 UTC
Thanks a lot for accepting the patch. I hope it will not cause too much
trouble, and I'm sure it will be helpful for really useful bigger cluster
designs.

Documentation and example configs: I will provide them within the next 5 days.
Code without documentation is not very useful.