Bug 50078 - Concurrent access to WeakHashMap in ConcurrentCache causes infinite loop, 100% CPU usage
Summary: Concurrent access to WeakHashMap in ConcurrentCache causes infinite loop, 100% CPU usage
Alias: None
Product: Tomcat 6
Classification: Unclassified
Component: Catalina
Version: unspecified
Hardware: PC Linux
Importance: P2 normal
Target Milestone: default
Assignee: Tomcat Developers Mailing List
Depends on:
Reported: 2010-10-12 01:10 UTC by Takayoshi Kimura
Modified: 2014-04-16 09:36 UTC

Proposed patch for tc6 trunk (1.71 KB, application/octet-stream)
2010-10-12 01:10 UTC, Takayoshi Kimura

Description Takayoshi Kimura 2010-10-12 01:10:00 UTC
Created attachment 26163
Proposed patch for tc6 trunk

There is a WeakHashMap instance that is accessed concurrently and sometimes causes an infinite loop. It's extremely hard to reproduce, but you can find similar concurrent-access looping problems using the following search keywords:

* weakhashmap infinite loop 100% cpu stuck synchronized concurrent
* java.util.WeakHashMap.put(WeakHashMap.java:405)
* java.util.WeakHashMap.get(WeakHashMap.java:355)

Both the org.apache.el.util.ConcurrentCache and javax.el.BeanELResolver.ConcurrentCache classes have this problem.

There are 20 threads stuck with thread stacks like the following:

"ajp-" daemon prio=10 tid=0x00002aab6425d800 nid=0x135b runnable [0x0000000048c14000]
   java.lang.Thread.State: RUNNABLE
	at java.util.WeakHashMap.get(WeakHashMap.java:355)
	at org.apache.el.util.ConcurrentCache.get(ConcurrentCache.java:24)
	at org.apache.el.lang.ExpressionBuilder.createNodeInternal(ExpressionBuilder.java:90)

"ajp-" daemon prio=10 tid=0x00002aab643ea000 nid=0x538d runnable [0x00000000458fd000]
   java.lang.Thread.State: RUNNABLE
	at java.util.WeakHashMap.put(WeakHashMap.java:405)
	at java.util.WeakHashMap.putAll(WeakHashMap.java:518)
	at org.apache.el.util.ConcurrentCache.put(ConcurrentCache.java:34)
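To illustrate the failure mode and the general shape of a fix: WeakHashMap is not thread-safe, so unsynchronized concurrent put/get calls can corrupt its internal bucket chains, leaving a get or put spinning forever on a cyclic entry list (the 100% CPU symptom above). A minimal sketch of a two-level cache that guards the WeakHashMap with synchronization is shown below; the class name SafeConcurrentCache and the exact layout are illustrative assumptions, not the actual Tomcat patch.

```java
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a bounded, thread-safe "eden" map as the fast path,
// plus a weak-reference "longterm" map whose access is always synchronized,
// because WeakHashMap itself is not safe for concurrent use.
public class SafeConcurrentCache<K, V> {
    private final int size;
    private final Map<K, V> eden;      // thread-safe, bounded fast path
    private final Map<K, V> longterm;  // weak refs; every access synchronized

    public SafeConcurrentCache(int size) {
        this.size = size;
        this.eden = new ConcurrentHashMap<>(size);
        this.longterm = new WeakHashMap<>(size);
    }

    public V get(K key) {
        V value = eden.get(key);
        if (value == null) {
            synchronized (longterm) {  // guard all WeakHashMap reads
                value = longterm.get(key);
            }
            if (value != null) {
                eden.put(key, value);  // promote back to the fast path
            }
        }
        return value;
    }

    public void put(K key, V value) {
        if (eden.size() >= size) {
            synchronized (longterm) {  // guard all WeakHashMap writes
                longterm.putAll(eden);
            }
            eden.clear();
        }
        eden.put(key, value);
    }
}
```

The synchronized blocks serialize all access to the WeakHashMap, which prevents the structural corruption that causes the infinite loop, while the ConcurrentHashMap fast path keeps the common case lock-free.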
Comment 1 Remy Maucherat 2010-10-12 11:46:18 UTC
Oops, this needs to be fixed ASAP.

I think the map sizes need to be made configurable (with the synchronization, 5000 might be too low for big setups and would result in a lot of contention).
Comment 2 Mark Thomas 2010-10-14 13:40:24 UTC
Thanks for the patch. I have fixed this in trunk and that fix will be in 7.0.4 onwards. I also added the ability to control the cache sizes.
The patches have also been proposed for 6.0.x.
Comment 3 Mark Thomas 2010-10-25 12:38:14 UTC
This has been fixed in 6.0.x and will be included in 6.0.30 onwards.