| Summary: | Idle eviction strategy could perform better (and is sometimes suboptimal) | | |
|---|---|---|---|
| Product: | Tomcat Modules | Reporter: | Bertrand Renuart <brenuart> |
| Component: | jdbc-pool | Assignee: | Tomcat Developers Mailing List <dev> |
| Status: | NEW --- | | |
| Severity: | normal | CC: | brenuart |
| Priority: | P2 | | |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Mac OS X 10.4 | | |
| Attachments: | TestCase to show that the pool will fail to shrink | | |
Description

**Bertrand Renuart** 2014-07-31 21:36:22 UTC

Created attachment 31866 [details]
TestCase to show that the pool will fail to shrink
**Christopher Schultz** (comment #2)

Nice analysis. I think you're on to something here. The "depth" of the pool should end up, over time, being roughly equal to the number of connections actually required to serve the needs of the clients, *but no more*.

Using a LIFO structure does in fact mean that the whole pool will stay "hot" regardless of how big it becomes... as long as the connections are used frequently enough. But the frequency of a check-out is not really relevant: only the number of connections that are used /simultaneously/ is relevant. A site may be able to get away with a pool with a depth of 2 or 3 connections, but if the pool grows to 50 connections, they all might stay in the pool and not be evicted due to the LIFO behavior.

I'm not sure if the LIFO queue gives a performance benefit -- because the head of the structure is only under contention for check-outs and the tail of the structure is only under contention for check-ins -- but it's worth investigating.

**Sebb** (comment #3)

(In reply to Christopher Schultz from comment #2)

> Using a LIFO structure does in fact mean that the whole pool will stay "hot"
> regardless of how big it becomes... as long as the connections are used
> frequently enough.

Surely you mean FIFO here?

> But the frequency of a check-out is not really relevant: only the number of
> connections that are used /simultaneously/ is relevant. A site may be able
> to get away with a pool with a depth of 2 or 3 connections, but if the pool
> grows to 50 connections, they all might stay in the pool and not be evicted
> due to the LIFO behavior.

Ditto.

> I'm not sure if the LIFO queue gives a performance benefit -- because the
> head of the structure is only under contention for check-outs and the tail
> of the structure is only under contention for check-ins

Huh? Is that really true of a LIFO queue?

Let's talk about MRU and LRU strategies instead; these terms seem to be less confusing in this context.
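The effect described above is easy to demonstrate. Here is a minimal sketch (plain `java.util.ArrayDeque`, not the pool's actual code) of a workload that never needs more than 2 connections at once against an idle pool of 10: with FIFO borrowing, every connection keeps looking recently used, while with LIFO borrowing only the connections actually needed concurrently are ever touched, leaving the rest eligible for eviction.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: 10 idle connections (numbered 0-9) serve a workload that borrows
// at most 2 connections at a time. Count how many distinct connections are
// ever used under FIFO vs. LIFO borrowing.
public class BorrowStrategy {

    static int distinctTouched(boolean lifo) {
        Deque<Integer> idle = new ArrayDeque<>();
        for (int i = 0; i < 10; i++) idle.addLast(i);   // pool of 10 idle connections
        boolean[] touched = new boolean[10];
        for (int round = 0; round < 1000; round++) {
            int a = idle.pollFirst();                   // borrow two connections
            int b = idle.pollFirst();
            touched[a] = touched[b] = true;
            if (lifo) {                                 // stack: return to the borrowing end
                idle.addFirst(b);
                idle.addFirst(a);
            } else {                                    // queue: return to the opposite end
                idle.addLast(a);
                idle.addLast(b);
            }
        }
        int n = 0;
        for (boolean t : touched) if (t) n++;
        return n;
    }

    public static void main(String[] args) {
        System.out.println("FIFO touches " + distinctTouched(false) + " of 10");
        System.out.println("LIFO touches " + distinctTouched(true) + " of 10");
    }
}
```

FIFO cycles through all 10 connections, so an idle-time-based evictor sees every one as recently used; LIFO touches only 2, so the other 8 accumulate idle time and can be evicted.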
In short, the proposition is to follow an MRU strategy when checking out connections from the idle pool. This way, the least recently used connections are the first to become eligible for eviction from the pool. So either we talk about the strategy used to check out/borrow connections, or the one followed to determine which connections should be evicted.

**Christopher Schultz**

(In reply to Sebb from comment #3)

> (In reply to Christopher Schultz from comment #2)
>
> > Using a LIFO structure does in fact mean that the whole pool will stay "hot"
> > regardless of how big it becomes... as long as the connections are used
> > frequently enough.
>
> Surely you mean FIFO here?

Yes, I meant FIFO. Sorry for the confusing typo.

> > But the frequency of a check-out is not really relevant: only the number of
> > connections that are used /simultaneously/ is relevant. A site may be able
> > to get away with a pool with a depth of 2 or 3 connections, but if the pool
> > grows to 50 connections, they all might stay in the pool and not be evicted
> > due to the LIFO behavior.
>
> Ditto

> > I'm not sure if the LIFO queue gives a performance benefit -- because the
> > head of the structure is only under contention for check-outs and the tail
> > of the structure is only under contention for check-ins
>
> Huh? Is that really true of a LIFO queue?

A queue is FIFO, a stack LIFO. To align the nomenclature here: queue is FIFO / LRU, and stack is LIFO / MRU.

For a stack/LIFO/MRU structure, there is only one "hot" end of the structure: both check-ins and check-outs happen in the same place (the "top" of the stack). For a queue/FIFO/LRU structure, check-ins happen at one end and check-outs at the other end. So, depending upon the synchronization techniques used, the queue may offer better performance at the structure level -- even if the stack model gives a better outcome. It was just a thought.

**Chuck Caldarale** (comment #6)

The contention issue is a red herring.
Both a queue and a stack will normally have to be completely locked when inserting or removing items, since there may be only zero or one entry in the container at the time of the operation. This will almost always outweigh any cache ping-ponging issues that could occur with a stack.

**Bertrand Renuart**

(In reply to Chuck Caldarale from comment #6)

The pool currently uses java.util.concurrent.ArrayBlockingQueue unless a "fair" implementation is requested (in which case a custom implementation is used). A stack/LIFO/MRU strategy could be built on top of a java.util.concurrent.LinkedBlockingDeque.

I'm not an expert in concurrency issues, but both implementations seem to use a single global lock for their put/get operations. We should therefore get the same performance, except maybe for a small penalty for the LinkedBlockingDeque due to the extra overhead of creating Nodes.

I made a quick prototype by changing the idle pool into a LinkedBlockingDeque and ran about 100 concurrent threads asking for 10,000 connections each. At first glance the difference between the two approaches is minimal. I haven't gone any further, though, because you may already have been doing tests in that direction...
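A minimal sketch of what such a LinkedBlockingDeque-based MRU idle pool could look like (class and method names are illustrative assumptions, not the actual jdbc-pool API): check-outs and check-ins both happen at the head, so the least recently used entries drift to the tail, where an eviction sweep can remove them.

```java
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;

// Illustrative MRU idle pool: borrow and release at the head of the deque,
// evict from the tail. Not the real jdbc-pool implementation.
public class MruIdlePool<C> {

    // Wrapper recording when a connection was last returned to the pool.
    static final class Idle<C> {
        final C conn;
        final long returnedAt = System.nanoTime();
        Idle(C conn) { this.conn = conn; }
    }

    private final LinkedBlockingDeque<Idle<C>> idle = new LinkedBlockingDeque<>();

    // Check-out: take the most recently returned connection (head of the deque).
    public C borrow(long timeout, TimeUnit unit) throws InterruptedException {
        Idle<C> e = idle.pollFirst(timeout, unit);
        return e == null ? null : e.conn;
    }

    // Check-in: push onto the head, so rarely used entries sink toward the tail.
    public void release(C conn) {
        idle.offerFirst(new Idle<>(conn));
    }

    // Eviction sweep: the tail holds the least recently used entries, so the
    // sweep can stop at the first entry that is not yet old enough.
    public int evictIdle(long maxIdleNanos, int minIdle) {
        int evicted = 0;
        long now = System.nanoTime();
        Idle<C> tail;
        while (idle.size() > minIdle
                && (tail = idle.peekLast()) != null
                && now - tail.returnedAt > maxIdleNanos) {
            if (idle.removeLastOccurrence(tail)) {
                evicted++;  // a real pool would close tail.conn here
            }
        }
        return evicted;
    }
}
```

Usage: after `release("a")`, `release("b")`, `release("c")`, a `borrow(...)` returns "c" (most recently returned), while an eviction sweep removes "a" and "b" from the cold tail once they exceed the idle threshold.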