Bug 52002 - Pool re-opens and re-issues closed connection
Status: RESOLVED FIXED
Product: Tomcat Modules
Classification: Unclassified
Component: jdbc-pool
Version: unspecified
Hardware: PC All
Importance: P2 normal with 5 votes
Target Milestone: ---
Assigned To: Tomcat Developers Mailing List
Reported: 2011-10-10 13:42 UTC by Kevin Greiner
Modified: 2012-03-27 15:12 UTC
CC: 1 user



Attachments
Patch to create new test case and fix to make it work (3.25 KB, application/octet-stream)
2011-10-10 13:42 UTC, Kevin Greiner
Patch to create new test case and fix. (6.52 KB, patch)
2011-11-01 00:02 UTC, Kevin Greiner

Description Kevin Greiner 2011-10-10 13:42:25 UTC
Created attachment 27751 [details]
Patch to create new test case and fix to make it work

Closing a connection returns the connection to the pool but does not clear its handler (the JDBC interceptor chain).  As a result, the next request for a connection may re-open the pooled connection and hand back a proxy that still shares the old handler.

This behavior breaks multi-threaded code that closes a connection twice.  Concretely: thread 1 closes a connection, thread 2 opens a new connection and receives the exact same connection object as thread 1, and thread 1 then closes its connection a second time, thereby breaking thread 2.
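The race described above comes down to how `java.lang.reflect.Proxy` works: a proxy instance has no state of its own, so every proxy built on the same `InvocationHandler` shares that handler's state. A minimal single-threaded sketch (not Tomcat source; all names here are illustrative) that reproduces the effect:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of why re-issuing a proxy whose InvocationHandler was never
// cleared breaks the second borrower: the "closed" flag lives in the
// shared handler, not in the individual proxy objects.
public class SharedHandlerBug {
    interface Conn {
        boolean isClosed();
        void close();
    }

    // Stands in for the pool's interceptor chain: one mutable "closed" flag.
    static class Handler implements InvocationHandler {
        boolean closed;

        public Object invoke(Object proxy, Method m, Object[] args) {
            if ("close".equals(m.getName())) { closed = true; return null; }
            if ("isClosed".equals(m.getName())) { return closed; }
            return null;
        }
    }

    static Conn proxyFor(Handler h) {
        return (Conn) Proxy.newProxyInstance(
                Conn.class.getClassLoader(), new Class<?>[] { Conn.class }, h);
    }

    public static void main(String[] args) {
        Handler shared = new Handler();

        Conn seenByThread1 = proxyFor(shared);
        seenByThread1.close();        // thread 1 returns the connection
        shared.closed = false;        // pool re-opens the handler on re-issue

        Conn seenByThread2 = proxyFor(shared);
        seenByThread1.close();        // thread 1's stray second close...

        // ...closes thread 2's "new" connection as well.
        System.out.println(seenByThread2.isClosed()); // prints "true"
    }
}
```

Note that `shared.closed = false` also silently "re-opens" `seenByThread1`, which is the other half of the bug: old proxies spring back to life when the underlying connection is re-issued.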

I'm attaching a patch with a new test case which fails against the current revision as well as a fix for ConnectionPool.
Comment 1 Kevin Greiner 2011-11-01 00:02:55 UTC
Created attachment 27879 [details]
Patch to create new test case and fix.

This patch directly addresses the defect in ConnectionPool's setupConnection method.  That method creates a new proxy instance each time it is called.  However, unlike an ordinary object, where each instance carries its own state, proxy objects store their state in their InvocationHandler.  In the original implementation, a call to getConnection() returned a new proxy instance and, as a side effect, restored every previous proxy of the same PooledConnection to an open state.

In this patch, I've created a new interceptor (ProxyCutOffConnection) to act as a unique, disposable head for a shared interceptor chain.  Each proxy instance created by setupConnection uses its own ProxyCutOffConnection as its handler.  When 'close' is called for the first time, ProxyCutOffConnection first passes the call to the next interceptor so that all normal close logic runs, then severs its link to that interceptor.  As a result, all future calls through that proxy object irrevocably fail with a connection-closed error.
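The disposable-head idea can be sketched as follows (simplified names, not the actual Tomcat classes or the attached patch): the head forwards the first close() to the shared chain so normal close logic runs, then drops its reference, so this particular proxy can never reach the pool again even if the same PooledConnection is re-issued to another thread.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Each borrowed proxy gets its own disposable head interceptor in front
// of the shared chain. After the first close(), the head severs itself;
// later calls through the old proxy fail instead of touching the pool.
public class CutOffSketch {
    interface Conn {
        void query();
        void close();
    }

    static class CutOffHead implements InvocationHandler {
        private InvocationHandler next; // the shared interceptor chain

        CutOffHead(InvocationHandler next) { this.next = next; }

        public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
            if (next == null) {
                // The chain was severed by an earlier close().
                throw new IllegalStateException("Connection has already been closed.");
            }
            Object result = next.invoke(proxy, m, args);
            if ("close".equals(m.getName())) {
                next = null; // sever: this proxy is dead; the chain lives on
            }
            return result;
        }
    }

    public static void main(String[] args) {
        final int[] closesSeenByChain = {0};
        InvocationHandler chain = (p, m, a) -> {
            if ("close".equals(m.getName())) closesSeenByChain[0]++;
            return null;
        };

        Conn c = (Conn) Proxy.newProxyInstance(
                Conn.class.getClassLoader(), new Class<?>[] { Conn.class },
                new CutOffHead(chain));

        c.close(); // first close reaches the chain, then severs the head
        try {
            c.close(); // a stray second close dies here, never reaching the pool
        } catch (IllegalStateException expected) {
            System.out.println("chain saw " + closesSeenByChain[0] + " close"); // prints "chain saw 1 close"
        }
    }
}
```

The shared chain is untouched by the severing, so the pool can hand the same underlying connection to the next borrower behind a fresh head.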
Comment 2 Rob Moore 2011-12-07 19:35:18 UTC
We're seeing behavior in a production application that seems likely to be a result of this issue. Any chance it might be included in the next release?
Comment 3 Filip Hanik 2012-03-20 14:19:48 UTC
(In reply to comment #2)
> We're seeing behavior in a production application that seems likely to be a
> result of this issue. Any chance it might be included in the next release?

I'll be adding the patch, but not enabled by default. Performance takes a hit if new objects are created each time an object is used from the pool, and this goes against the very idea that started this pool. So I will put in a flag to toggle the behavior.
Comment 4 Filip Hanik 2012-03-20 15:47:53 UTC
Fixed in r1302948
Comment 5 Rob Moore 2012-03-20 22:58:32 UTC
(In reply to comment #3)
> (In reply to comment #2)
> > We're seeing behavior in a production application that seems likely to be a
> > result of this issue. Any chance it might be included in the next release?
> 
> I'll be adding the patch, but not enabled by default. Performance takes a hit
> if new objects are created each time an object is used from the pool, and this
> goes against the very idea that started this pool. So I will put in a flag to
> toggle the behavior.

I'm not trying to suggest that you should compromise your design goals by introducing this change. Is there an alternative that addresses the scenario outlined in Kevin's first comment?
Comment 6 Kevin Greiner 2012-03-20 23:45:38 UTC
(In reply to comment #3)
> I'll be adding the patch, but not enabled by default.

I would recommend enabling it by default, with your flag's documentation clearly stating what a program must do, or not do, to safely disable it.  Most users will expect the connection pool's default configuration to be absolutely reliable.  If it isn't, they'll simply discard it for another solution.  Only after they have gained confidence in it will they look at the fine print on how to customize it.
Comment 7 Filip Hanik 2012-03-21 18:28:34 UTC
(In reply to comment #6)
> (In reply to comment #3)
> > I'll be adding the patch, but not enabled by default.
> 
> I would recommend enabling it by default, with your flag's documentation
> clearly stating what a program must do, or not do, to safely disable it.
> Most users will expect the connection pool's default configuration to be
> absolutely reliable.  If it isn't, they'll simply discard it for another
> solution.  Only after they have gained confidence in it will they look at
> the fine print on how to customize it.

Let me run some performance tests with and without it. If it performs well with it enabled, I see no issue with enabling it by default. If performance is worse, then it breaks the main goal of the component, which is performance.
Comment 8 Filip Hanik 2012-03-27 15:12:25 UTC
The Facade is now enabled by default

Fixed in r1305861, r1305862, r1305866.
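For readers arriving here later: in released versions of Tomcat's jdbc-pool, this facade is controlled by the `useDisposableConnectionFacade` pool attribute, which is enabled by default, matching comment 8. A minimal JNDI Resource sketch; the resource name, driver class, and URL below are placeholders, not values from this bug:

```xml
<Resource name="jdbc/TestDB"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="org.h2.Driver"
          url="jdbc:h2:mem:test"
          useDisposableConnectionFacade="true"/>
```

Setting the attribute to false restores the pre-fix behavior and should only be done by code that is certain it never touches a connection after closing it.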