Bug 57544 - Server CPU maxed out (100% per core) randomly after 10 to 30 hours
Summary: Server CPU maxed out (100% per core) randomly after 10 to 30 hours
Status: RESOLVED FIXED
Alias: None
Product: Tomcat 8
Classification: Unclassified
Component: Catalina
Version: 8.0.18
Hardware: PC Linux
Importance: P2 regression
Target Milestone: ----
Assignee: Tomcat Developers Mailing List
URL:
Keywords:
Duplicates: 60578 60581
Depends on:
Blocks:
 
Reported: 2015-02-06 17:36 UTC by Xavier F. Kral
Modified: 2017-02-16 00:11 UTC
CC List: 3 users

Attachments
To configure the JVM JDK 1.8 running Tomcat 8.0.18 (433 bytes, text/plain)
2015-02-06 17:50 UTC, Xavier F. Kral
Minimum configuration of the webapp that exhibits the issue (1.63 KB, text/plain)
2015-02-06 17:52 UTC, Xavier F. Kral

Description Xavier F. Kral 2015-02-06 17:36:06 UTC
"uniwar-exec-18" #59 daemon prio=5 os_prio=0 tid=0x000000000096a000 nid=0x58da runnable [0x000000004457c000]
   java.lang.Thread.State: RUNNABLE
        at org.apache.coyote.http11.AbstractInputBuffer.nextRequest(AbstractInputBuffer.java:298)
        at org.apache.coyote.http11.AbstractNioInputBuffer.nextRequest(AbstractNioInputBuffer.java:150)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:659)
        at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1558)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1515)
        - locked <0x00000006771385a8> (a org.apache.tomcat.util.net.NioChannel)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:745)
Comment 1 Xavier F. Kral 2015-02-06 17:49:32 UTC
We have a dedicated self-managed server running Ubuntu with Java 8 and Tomcat. That's all. Since we upgraded from Tomcat 7 to Tomcat 8.0.18, the server's CPU cores become maxed out after a dozen hours. The server has a light load (5 requests per second). There is plenty of memory available, which seems to exclude the possibility that the GC is taking all the CPU time.

Example of normal CPU usage (Tomcat 7, and Tomcat 8 during its first hours)
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND                           CURRENT TIME
root     21902  6.7 16.9 5131664 1391624 ?     Sl   Feb01 336:48 /usr/lib/jvm/jdk1.8.0_25/bin/java 09:48 Feb-5

Example of ABNORMAL CPU usage; notice the 101% instead of 6% (Tomcat 8.0.18 after a dozen hours)
root     22675  101 19.2 8339292 1574312 pts/0 Sl   Feb05 720:49 /usr/lib/jvm/jdk1.8.0_25/bin/java

$java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)

When doing a jstack, notice that 2 of the executor threads NEVER change. Many other executor threads are added and removed as expected, but these 2 RUNNABLE threads seem to be stuck in an infinite loop in AbstractInputBuffer.nextRequest(AbstractInputBuffer.java:298). A programmatic way to confirm which threads are burning the CPU is sketched after the dump.
$jstack 22675 | grep runnable
ABNORMAL==>"uniwar-exec-18" #59 daemon prio=5 os_prio=0 tid=0x000000000096a000 nid=0x58da runnable [0x000000004457c000]
ABNORMAL==>"uniwar-exec-10" #50 daemon prio=5 os_prio=0 tid=0x00007fb945175800 nid=0x58d1 runnable [0x0000000043c73000]
"http-nio-8080-Acceptor-0" #40 daemon prio=5 os_prio=0 tid=0x00007fb955838000 nid=0x58c8 runnable [0x000000004336a000]
"http-nio-8080-ClientPoller-1" #39 daemon prio=5 os_prio=0 tid=0x00007fb95598a000 nid=0x58c7 runnable [0x0000000043269000]
"http-nio-8080-ClientPoller-0" #38 daemon prio=5 os_prio=0 tid=0x00007fb955989000 nid=0x58c6 runnable [0x0000000043168000]
"ObjectServerDaemon" #35 daemon prio=5 os_prio=0 tid=0x0000000002d28000 nid=0x58c3 runnable [0x0000000042f66000]
"UniWarServerDaemon" #34 daemon prio=5 os_prio=0 tid=0x0000000002d25800 nid=0x58c2 runnable [0x0000000042e65000]
"NioBlockingSelector.BlockPoller-1" #15 daemon prio=5 os_prio=0 tid=0x00007fb95449f000 nid=0x58ad runnable [0x00000000404a9000]
"Service Thread" #10 daemon prio=9 os_prio=0 tid=0x00007fb954127800 nid=0x58a9 runnable [0x0000000000000000]
"Signal Dispatcher" #5 daemon prio=9 os_prio=0 tid=0x00007fb954107800 nid=0x58a4 runnable [0x0000000000000000]
"main" #1 prio=5 os_prio=0 tid=0x000000000060c800 nid=0x5894 runnable [0x000000004078c000]
"VM Thread" os_prio=0 tid=0x00007fb9540c6800 nid=0x58a0 runnable
"Gang worker#0 (Parallel GC Threads)" os_prio=0 tid=0x000000000061d800 nid=0x5895 runnable
"Gang worker#1 (Parallel GC Threads)" os_prio=0 tid=0x000000000061f800 nid=0x5896 runnable
"Gang worker#2 (Parallel GC Threads)" os_prio=0 tid=0x0000000000621000 nid=0x5897 runnable
"Gang worker#3 (Parallel GC Threads)" os_prio=0 tid=0x0000000000623000 nid=0x5898 runnable
"Gang worker#4 (Parallel GC Threads)" os_prio=0 tid=0x00007fb954001800 nid=0x5899 runnable
"Gang worker#5 (Parallel GC Threads)" os_prio=0 tid=0x00007fb954003800 nid=0x589a runnable
"Gang worker#6 (Parallel GC Threads)" os_prio=0 tid=0x00007fb954005000 nid=0x589b runnable
"Gang worker#7 (Parallel GC Threads)" os_prio=0 tid=0x00007fb954007000 nid=0x589c runnable
"Concurrent Mark-Sweep GC Thread" os_prio=0 tid=0x00007fb954078000 nid=0x589f runnable
"Gang worker#0 (Parallel CMS Threads)" os_prio=0 tid=0x00007fb954074800 nid=0x589d runnable
"Gang worker#1 (Parallel CMS Threads)" os_prio=0 tid=0x00007fb954076000 nid=0x589e runnable

The command below rules out the possibility that the GC is overloaded; an in-process cross-check via the GC MXBeans is sketched after the output.
$jmap -heap 22675
Attaching to process ID 22675, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.25-b02

using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 6442450944 (6144.0MB)
   NewSize                  = 697892864 (665.5625MB)
   MaxNewSize               = 697892864 (665.5625MB)
   OldSize                  = 375848960 (358.4375MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 628162560 (599.0625MB)
   used     = 529591368 (505.0576858520508MB)
   free     = 98571192 (94.00481414794922MB)
   84.30801224447379% used
Eden Space:
   capacity = 558432256 (532.5625MB)
   used     = 514006808 (490.1950912475586MB)
   free     = 44425448 (42.367408752441406MB)
   92.04461283841025% used
From Space:
   capacity = 69730304 (66.5MB)
   used     = 15584560 (14.862594604492188MB)
   free     = 54145744 (51.63740539550781MB)
   22.349766322544642% used
To Space:
   capacity = 69730304 (66.5MB)
   used     = 0 (0.0MB)
   free     = 69730304 (66.5MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 556462080 (530.68359375MB)
   used     = 11588848467912 MB
   free     = 6294955703176142416 (6.003337577034133E12MB)
   -1.131246122398795E12% used

27262 interned Strings occupying 3141208 bytes.
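
The same conclusion can be cross-checked from inside the JVM with the standard GarbageCollectorMXBean API, which reports cumulative collection time per collector (ParNew and CMS on this configuration). A minimal sketch follows, with an illustrative class name and 10-second window:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverheadProbe {

    // Cumulative time spent in all collectors so far, in milliseconds.
    static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if not reported
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long before = totalGcMillis();
        Thread.sleep(10_000); // sampling window
        long spent = totalGcMillis() - before;
        // A GC pinning a core would account for close to the whole window;
        // a few hundred milliseconds means the CPU is being spent elsewhere.
        System.out.println("GC time over a 10 s window: " + spent + " ms");
    }
}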
Comment 2 Xavier F. Kral 2015-02-06 17:50:45 UTC
Created attachment 32438 [details]
To configure the JVM JDK 1.8 running Tomcat 8.0.18
Comment 3 Xavier F. Kral 2015-02-06 17:52:00 UTC
Created attachment 32439 [details]
Minimum configuration of the webapp that exhibits the issue
Comment 4 Mark Thomas 2015-02-06 17:53:32 UTC
Yep. If you enter that loop with pos == 0, it is going to enter an infinite loop.

The loop doesn't need to be there anyway. System.arraycopy() is smart enough to handle the problem the loop is trying to avoid.
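
For context, here is a simplified reconstruction of the copy in question (variable names follow the Tomcat buffer code, but this is a sketch rather than the verbatim 8.0.18 method). The loop shuffles the unread tail of buf to the front in pos-sized chunks so that source and destination regions never overlap; when pos == 0, neither offset advances and the thread spins forever:

// buf = read buffer, pos = start of unconsumed bytes,
// lastValid = end of valid data.
if (lastValid - pos > 0) {
    int npos = 0;
    int opos = pos;
    // With pos == 0 this condition holds whenever lastValid > 0,
    // but npos and opos each advance by 0: an infinite, CPU-bound loop.
    while (lastValid - opos > opos - npos) {
        System.arraycopy(buf, opos, buf, npos, opos - npos);
        npos += pos;
        opos += pos;
    }
    System.arraycopy(buf, opos, buf, npos, lastValid - opos);
}

Since System.arraycopy() is specified to copy as if through an intermediate temporary array, overlapping ranges are already safe and the whole block can collapse to a single call:

if (lastValid - pos > 0) {
    System.arraycopy(buf, pos, buf, 0, lastValid - pos);
}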
Comment 5 Mark Thomas 2015-02-06 18:03:54 UTC
This has been fixed in trunk (9.0.x development), in 8.0.x for 8.0.19 onwards, and in 7.0.x for 7.0.60 onwards.
Comment 6 neet.musafir 2015-08-17 17:16:40 UTC
Hey Mark, we upgraded Tomcat from 6.0 to 7.0.57 and now the CPU starts spiking a few hours after Tomcat startup. How can I verify whether we are being hit by bug #57544?

Regards, ne
Comment 7 Ralf Hauser 2017-01-12 12:18:05 UTC
see also Bug 60578
Comment 8 Mark Thomas 2017-01-12 13:25:38 UTC
*** Bug 60578 has been marked as a duplicate of this bug. ***
Comment 9 Mark Thomas 2017-01-12 17:34:54 UTC
*** Bug 60581 has been marked as a duplicate of this bug. ***