When acceptorThreadCount > 1, maxConnections is not honoured. This affects the BIO and NIO connectors (others not tested).

---------- test config begin --------
<Executor name="tomcatThreadPool"
          namePrefix="catalina-exec-"
          maxThreads="300"
          minSpareThreads="4"/>

<Connector port="9993"
           protocol="org.apache.coyote.http11.Http11Protocol"
           URIEncoding="ISO-8859-1"
           enableLookups="false"
           acceptorThreadCount="2"
           executor="tomcatThreadPool"
           acceptCount="1"
           maxConnections="1" />

<Connector port="9994"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           URIEncoding="ISO-8859-1"
           enableLookups="false"
           acceptorThreadCount="2"
           executor="tomcatThreadPool"
           acceptCount="1"
           maxConnections="1" />
---------- test config end ----------

---- Test-1 (acceptorThreadCount="1") --- OK ---
$ ab2 -n 20000 -c 1000 http://localhost:999x/
$ netstat -atn | grep :999x | grep ESTABLISHED
This shows 3-5 established connections (an acceptable number).

---- Test-2 (acceptorThreadCount="2") --- KO ---
$ ab2 -n 20000 -c 1000 http://localhost:999x/
$ netstat -atn | grep :999x | grep ESTABLISHED
This shows more than 100 established connections, which is far too many.

The logs show traces like this when a socket closes:

----- catalina.out begin -----
May 22, 2011 9:10:51 PM org.apache.tomcat.util.net.AbstractEndpoint countDownConnection
WARNING: Incorrect connection count, multiple socket.close called on the same socket.
----- catalina.out end -------
I'd strongly recommend defaulting acceptorThreadCount to 1 and possibly deprecating the attribute altogether. An acceptor thread count larger than 1 has essentially zero impact on performance while consuming extra system resources. The operating system holds a shared lock underneath to accept new connections, so having multiple threads calling ServerSocket.accept does nothing except queue them up for that lock.
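A minimal sketch (my own code, not Tomcat's; the port number and thread count are made up) of what acceptorThreadCount > 1 amounts to: several threads blocked in accept() on the same listening socket, taking turns on the kernel's accept queue.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiAcceptorSketch {

    public static void main(String[] args) throws IOException {
        // One listening socket shared by several acceptor threads,
        // which is what acceptorThreadCount > 1 sets up.
        final ServerSocket server = new ServerSocket(9993);
        final int acceptorThreadCount = 2;

        for (int i = 0; i < acceptorThreadCount; i++) {
            final int id = i;
            new Thread(() -> {
                while (true) {
                    try {
                        // All acceptor threads block here; the kernel hands each new
                        // connection to exactly one of them, so the extra threads
                        // mostly just wait for their turn.
                        Socket socket = server.accept();
                        // hand the socket off to a worker pool (omitted), then:
                        socket.close();
                    } catch (IOException e) {
                        break;
                    }
                }
            }, "acceptor-" + id).start();
        }
    }
}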
I agree with Filip's comment regarding forcing acceptorThreadCount to 1, but I'd still like to get to the bottom of why the maxConnections limit isn't being enforced. I'll try to look at this today.
It looks like there's a race condition between the acceptor thread being permitted to accept a connection and the connection counter being updated. Two acceptor threads can both pass the awaitConnection() condition, accept a connection, and then both call countUp(). The connection count then goes above the signal level, and the awaitConnection() blocking condition will never be met again as long as the count stays above the maximum (since CounterLatch is designed to count both up and down, it compares against the signal level exactly).

The fix could be to remove countUp()/countDown(), change CounterLatch.await() to a CounterLatch.awaitAndIncrement()/CounterLatch.awaitAndDecrement() pair, and have the connection count atomically updated in Sync.tryAcquireShared() using AtomicLong.compareAndSet() with the +1/-1 delta passed in as the argument. e.g.:

protected int tryAcquireShared(int delta) {
    while (true) {
        final long current = count.get();
        if (!released && (current == signal)) {
            return -1;
        }
        if (count.compareAndSet(current, current + delta)) {
            return 1;
        }
    }
}
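To make that concrete, here is a self-contained sketch of the proposal (the class name and wiring are mine; awaitAndIncrement()/awaitAndDecrement() are the proposed methods, not existing Tomcat API). The point is that the limit check and the counter update happen in one atomic step inside tryAcquireShared(), so two acceptors can no longer both pass the check before either has counted up.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class AtomicCounterLatchSketch {

    private final long signal;                      // the maxConnections limit
    private final AtomicLong count = new AtomicLong(0);
    private volatile boolean released = false;      // set by a releaseAll()-style method in the real class
    private final Sync sync = new Sync();

    public AtomicCounterLatchSketch(long signal) {
        this.signal = signal;
    }

    private final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int delta) {
            while (true) {
                final long current = count.get();
                if (!released && delta > 0 && current == signal) {
                    return -1;  // at the limit: block until a connection is released
                }
                if (count.compareAndSet(current, current + delta)) {
                    return 1;   // check + update succeeded as one atomic step
                }
            }
        }

        @Override
        protected boolean tryReleaseShared(int arg) {
            return true;        // wake up acceptors waiting at the limit
        }
    }

    // Called by the acceptor before socket.accept(): blocks while count == signal.
    public void awaitAndIncrement() throws InterruptedException {
        sync.acquireSharedInterruptibly(+1);
    }

    // Called when a connection is closed.
    public void awaitAndDecrement() {
        sync.acquireShared(-1);
        sync.releaseShared(0);  // let a blocked acceptor re-check the count
    }
}

The decrement path deliberately skips the limit check (only increments need to block), and the explicit releaseShared() after the decrement is what wakes an acceptor parked at the limit.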
I ended up replacing CounterLatch with LimitLatch, which has reduced functionality aligned more closely with what the connectors need. I also made maxConnections dynamically configurable. I'll look into removing / deprecating the acceptorThreadCount attribute next.

This is fixed in 7.0.x and will be included in 7.0.15.

Note: if testing with ab, be aware that the TCP backlog will cause more established connections to be observed than Tomcat is currently handling.
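For anyone following along, the new class is org.apache.tomcat.util.threads.LimitLatch, with roughly countUpOrAwait()/countDown()/setLimit(). The sketch below shows how an endpoint's acceptor would use it (simplified, not the actual AbstractEndpoint code, with error handling reduced to the essentials):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

import org.apache.tomcat.util.threads.LimitLatch;

public class LimitLatchUsageSketch {

    // maxConnections; can be changed at runtime via connectionLimit.setLimit(newLimit)
    private final LimitLatch connectionLimit = new LimitLatch(1);
    private final ServerSocket server;

    public LimitLatchUsageSketch(ServerSocket server) {
        this.server = server;
    }

    public void acceptorLoop() {
        while (true) {
            try {
                // Blocks while the count is at the limit, and increments it
                // atomically once a slot is free.
                connectionLimit.countUpOrAwait();
            } catch (InterruptedException e) {
                break;                           // no slot was taken, nothing to give back
            }
            try {
                Socket socket = server.accept();
                // hand the socket off to a worker, which must eventually call close(socket)
            } catch (IOException e) {
                connectionLimit.countDown();     // accept failed: give the slot back
                break;
            }
        }
    }

    public void close(Socket socket) {
        try {
            socket.close();
        } catch (IOException ignored) {
            // already closed / reset by peer
        } finally {
            connectionLimit.countDown();         // release the connection slot exactly once
        }
    }
}

Calling countDown() exactly once per accepted socket is the important invariant; the "multiple socket.close called on the same socket" warning in the original report is what gets logged when it is broken.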
On reflection, I decided to leave the acceptorThreadCount configuration option in place. It already defaults to one, and since the acceptor thread does more than just call socket.accept(), it is possible that multiple acceptor threads may offer some limited benefit if there is a spike in new connections.