Bug 62958 - too many open files if server sits idle
Summary: too many open files if server sits idle
Status: RESOLVED DUPLICATE of bug 62924
Alias: None
Product: Tomcat 9
Classification: Unclassified
Component: Catalina
Version: 9.0.13
Hardware: PC Linux
Importance: P2 normal
Target Milestone: -----
Assignee: Tomcat Developers Mailing List
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2018-11-27 20:09 UTC by shawn.durbin
Modified: 2018-11-27 20:11 UTC
CC List: 0 users
Description shawn.durbin 2018-11-27 20:09:28 UTC
We suspect that the 9.0.13 upgrade introduced an open file leak.

Steps to reproduce the problem:
1.	Download apache-tomcat-9.0.13.tar.gz
2.	gunzip apache-tomcat-9.0.13.tar.gz
3.	tar -xf apache-tomcat-9.0.13.tar
4.	cd apache-tomcat-9.0.13/bin
5.	./startup.sh
6.	ps -ef | grep tomcat
7.	Grab the PID# from ps output
8.	Execute the following command every 10 seconds or so and watch the number of open files rise (a monitoring loop sketch follows this list).
a.	ls -l /proc/<PID#>/fd | grep -i "tomcat-users"
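For convenience, the check from step 8 can be wrapped in a small shell loop that prints the descriptor count every 10 seconds (a sketch only; <PID#> is the placeholder from step 7):

   # Sketch of a monitoring loop: counts descriptors pointing at tomcat-users.xml
   # every 10 seconds. Replace <PID#> with the Tomcat PID obtained in step 7.
   while true; do
       date
       ls -l /proc/<PID#>/fd | grep -ci "tomcat-users"
       sleep 10
   done

On an affected 9.0.13 instance the count printed by this loop keeps rising even while the server receives no traffic.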

Actual commands/output:
[root@wso2-as-001 bin]# ls -l /proc/1973/fd | grep -i "tomcat-users"
lr-x------. 1 root root 64 Nov 27 11:13 71 -> /home/sdurbin@redcedarill.com/apache-tomcat-9.0.13/conf/tomcat-users.xml
lr-x------. 1 root root 64 Nov 27 11:13 72 -> /home/sdurbin@redcedarill.com/apache-tomcat-9.0.13/conf/tomcat-users.xml
lr-x------. 1 root root 64 Nov 27 11:13 73 -> /home/sdurbin@redcedarill.com/apache-tomcat-9.0.13/conf/tomcat-users.xml
[root@wso2-as-001 bin]# ls -l /proc/1973/fd | grep -i "tomcat-users"
lr-x------. 1 root root 64 Nov 27 11:13 71 -> /home/sdurbin@redcedarill.com/apache-tomcat-9.0.13/conf/tomcat-users.xml
lr-x------. 1 root root 64 Nov 27 11:13 72 -> /home/sdurbin@redcedarill.com/apache-tomcat-9.0.13/conf/tomcat-users.xml
lr-x------. 1 root root 64 Nov 27 11:13 73 -> /home/sdurbin@redcedarill.com/apache-tomcat-9.0.13/conf/tomcat-users.xml
lr-x------. 1 root root 64 Nov 27 11:13 74 -> /home/sdurbin@redcedarill.com/apache-tomcat-9.0.13/conf/tomcat-users.xml

Similar steps can be followed with apache-tomcat-9.0.12.tar.gz to show that this did not happen previously.
This caused our Tomcat instance to essentially "hang" as it idled overnight: the number of open files surpassed our previously configured limit of 4096.
Note that we only notice this issue while the server remains idle. Deploying or undeploying an application cleans these open files up, after which the count starts climbing again.
Also note that we have implemented a workaround by altering our unit file to raise the process's open file limit to 65535 (a sketch of the drop-in follows).
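For reference, the workaround amounts to a systemd drop-in along these lines (a sketch only; the unit name tomcat.service and the drop-in file name are assumptions, not taken from our actual configuration):

   # Sketch of the workaround: raise the open file limit for the Tomcat unit.
   # "tomcat.service" and "limits.conf" are placeholder names.
   mkdir -p /etc/systemd/system/tomcat.service.d
   printf '[Service]\nLimitNOFILE=65535\n' \
       > /etc/systemd/system/tomcat.service.d/limits.conf
   systemctl daemon-reload
   systemctl restart tomcat.service

This only raises the ceiling for the process; it does not address the underlying leak.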

System information:
[root@wso2-as-001 bin]# cat /etc/centos-release
   CentOS Linux release 7.5.1804 (Core)
[root@wso2-as-001 bin]# uname -a
   Linux wso2-as-001 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@wso2-as-001 bin]# java -version
   java version "1.8.0_191"
   Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
   Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)

Process limit information:
[root@wso2-as-001 bin]# cat /proc/1973/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             7262                 7262                 processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       7262                 7262                 signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Comment 1 Mark Thomas 2018-11-27 20:11:00 UTC

*** This bug has been marked as a duplicate of bug 62924 ***