Bug 25060 - Reloading context orphans currently open jndi datasource connections
Summary: Reloading context orphans currently open jndi datasource connections
Alias: None
Product: Tomcat 4
Classification: Unclassified
Component: Unknown
Version: 4.1.27
Hardware: Other / Other
Importance: P3 enhancement
Target Milestone: ---
Assignee: Tomcat Developers Mailing List
Depends on:
Reported: 2003-11-27 21:15 UTC by Wayne Schroeder
Modified: 2011-03-08 17:20 UTC


Description Wayne Schroeder 2003-11-27 21:15:10 UTC
I fiddled around with this for a while and have determined, to the best of my
ability, that this is a real bug.  I have a JNDI datasource connected to a
postgresql server.  I have two JNDI resources (a reader and a writer) so that
later I can implement a system with replication etc. and handle writes and
reads on different connections.  To summarize: after using the system, there
are two connections to postgres that get reused -- one reader and one writer.
Under load this number increases and will slowly go back down; I usually end
up with two idle connections (one reader and one writer) under no load.  If
you reload the context where the datasource is defined (it's a
context-specific datasource), the number of connections jumps by two once the
datasource is used.  Each reload produces at least 2 more connections until I
restart the server.  It appears that after a reload, the 'persisted
connections' get abandoned / orphaned.  Eventually I hit my max connections,
cannot acquire any more, and the system fails.  I have tried the abandoned
connection parameters and have added debug logging to my code to ensure that
I am indeed calling close on the connections I check out, even in exception
and error cases.  Under normal usage without reloads, there is no connection
leakage.

This is on a Solaris 8 machine with the 4.1.27-hotfix-22096.tar.gz applied.
Let me know if more information is required.  I have this in a development
environment and can let someone attach with jdb and hammer on the thing,
since it's not a production system -- if that will help in getting a repro.
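For context, a context-scoped reader/writer setup like the one described might look something like the following Tomcat 4.1 configuration fragment. This is a sketch with hypothetical resource names and connection parameters; the reporter's actual configuration is not shown in the report. Each reload of such a Context creates a fresh pool, while the old pool's idle connections are what end up orphaned:

```
<!-- Hypothetical context-scoped DataSources (Tomcat 4.1 syntax).
     Names and parameters are illustrative only. -->
<Context path="/myapp" docBase="myapp" reloadable="true">
  <Resource name="jdbc/reader" auth="Container" type="javax.sql.DataSource"/>
  <ResourceParams name="jdbc/reader">
    <parameter><name>driverClassName</name><value>org.postgresql.Driver</value></parameter>
    <parameter><name>url</name><value>jdbc:postgresql://localhost/mydb</value></parameter>
  </ResourceParams>
  <!-- jdbc/writer would be defined the same way -->
</Context>
```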

Comment 1 Glenn Nielsen 2003-12-06 16:38:26 UTC
I have verified your bug report.  If this is happening with a
JNDI named DataSource, it might also be happening with other JNDI
resources that are created, and could be a source of a memory leak.

Great bug report.  I don't have a solution yet, but will be
researching this.

Thanks Wayne
Comment 2 Remy Maucherat 2003-12-06 16:47:57 UTC
I think the only clean way to "fix" this is to define the data source as global,
and use resource links. There's no API in JNDI for releasing resources, so ... (and I
don't want to introduce any proprietary APIs either).
Maybe waiting for GC would be the solution, and maybe there's a memory leak
associated with the JNDI context.
Comment 3 Glenn Nielsen 2003-12-07 17:34:01 UTC
Yes, it may be that when a JNDI resource is defined within
a Context and the Context is reloaded or stopped, the previous
JNDI objects still consume their resources, including db
connections, until they are GC'd.

At a minimum we need to verify that the already existing resources
become eligible for GC after a reload/stop of a Context and that there
is no memory/resource leak.

Even if there is no memory leak and the previous JNDI DataSource becomes
eligible for GC, it would be nice to explicitly close the db connections.

If the DBCP DataSource is being used, a possible solution to free up
resources would be a Context LifecycleListener that does a close
on the DBCP DataSource during a Context reload/stop.
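The core of the listener approach Glenn describes could be sketched as follows. This is a hypothetical illustration, not Tomcat's actual implementation: the Tomcat LifecycleListener wiring is omitted, and since javax.sql.DataSource itself declares no close() method, the sketch invokes close() reflectively if the pool implementation (for example DBCP's BasicDataSource) provides one.

```java
import java.lang.reflect.Method;

/**
 * Hypothetical helper for a context-stop cleanup hook: tries to close a
 * pooled DataSource so its physical db connections are released instead
 * of being orphaned until GC.  javax.sql.DataSource has no close(), so
 * we look for one reflectively (DBCP's BasicDataSource declares close()).
 */
public class JndiResourceCleaner {

    /** Invoke close() on the resource if it has one; swallow failures. */
    public static boolean closeQuietly(Object resource) {
        if (resource == null) {
            return false;
        }
        try {
            Method close = resource.getClass().getMethod("close");
            close.setAccessible(true);
            close.invoke(resource);
            return true;
        } catch (Exception e) {
            // No close() method, or it failed: nothing more we can do here.
            return false;
        }
    }
}
```

A LifecycleListener would call a helper like this for each DataSource found in the context's JNDI namespace when it receives the stop event.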
Comment 4 Mark Thomas 2005-06-08 01:20:26 UTC
I have confirmed that, with the latest code for 4.1.x, the orphaned
connections are eligible for garbage collection.

It is possible, though I have not confirmed it, that as a result of bug 20758
these connections were not eligible for gc in 4.1.30 and earlier, but this is
only speculation on my part.

I agree with Glenn that it is possible that some explicit clean-up could be
performed using a Context LifeCycle Listener but this is a 'nice to have'.
Therefore, since there is no memory leak here, I am changing this issue to an
enhancement.
Comment 5 Rafael Leite 2007-04-11 07:57:55 UTC
Hey Mark!
I respectfully disagree with you about this being an enhancement.
If the datasources are left to be garbage collected, their connections to the
database remain open (as I just experienced on Tomcat 5.0.28). Since most
database installations enforce a maximum number of connections, the new
datasource resulting from the context's reload might not be able to connect to
the database, right? If these statements are right, IMHO this issue is a bug.
What do you think?
Thank you for your time! :)
Comment 6 Cyril Bonté 2011-02-05 16:32:22 UTC
Hi, I wanted to open a bug report but eventually found this old one.

(In reply to comment #4)
> I agree with Glenn that it is possible that some explicit clean-up could be
> performed using a Context LifeCycle Listener but this is a 'nice to have'.
> Therefore, since there is no memory leak here, I am changing this issue to an
> enhancement.

I don't totally agree, because some pool configurations can cause a memory leak (due to a thread leak). For example, when DBCP is configured with timeBetweenEvictionRunsMillis > 0, the eviction thread won't stop at reload. After several reloads, PermGen becomes full. This is still true with Tomcat 7.
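For reference, the configuration Cyril describes would look something like this hypothetical Tomcat 7 context.xml fragment (names and values are illustrative). A positive timeBetweenEvictionRunsMillis makes Commons DBCP start a background evictor thread, and that thread surviving a context reload is the leak in question:

```
<Context>
  <!-- Hypothetical resource; a positive timeBetweenEvictionRunsMillis
       starts DBCP's idle-connection evictor thread. -->
  <Resource name="jdbc/mydb" auth="Container"
            type="javax.sql.DataSource"
            driverClassName="org.postgresql.Driver"
            url="jdbc:postgresql://localhost/mydb"
            minEvictableIdleTimeMillis="60000"
            timeBetweenEvictionRunsMillis="30000"/>
</Context>
```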
Comment 7 Mark Thomas 2011-03-08 17:20:28 UTC
A LifecycleListener isn't necessarily the best place for this and it would require explicit configuration.

I have added some clean-up for DataSource resources when naming resources are stopped but this is far from a generic solution for all resources (and neither is it meant to be). As Remy points out what is needed is a standard interface for releasing JNDI resources. DataSources are sufficiently widely used and the issues sufficiently problematic that I think it makes sense to address them.

The clean-up has been added to Tomcat 7 and will be included in 7.0.11 onwards.

Regarding the thread leak with Commons DBCP, that is a Commons DBCP bug, although one that might be hard to fix.