Summary: | JK status manager: mass node handling doesn't work | ||
---|---|---|---|
Product: | Tomcat Connectors | Reporter: | Jan Stefl <jstefl> |
Component: | mod_jk | Assignee: | Tomcat Developers Mailing List <dev> |
Status: | RESOLVED FIXED | ||
Severity: | normal | CC: | mail-asf-bugzilla |
Priority: | P2 | ||
Version: | 1.2.36 | ||
Target Milestone: | --- | ||
Hardware: | PC | ||
OS: | Linux | ||
Attachments: | Configurations and log files |
Can you please check version 1.2.37 plus the following patch:

http://svn.apache.org/viewvc?view=revision&revision=1354021

Please report back whether the problem is fixed with this.

Regards,

Rainer

(In reply to comment #1)
> Can you please check version 1.2.37 plus the following patch:
>
> http://svn.apache.org/viewvc?view=revision&revision=1354021
>
> Please report back whether the problem is fixed with this.
>
> Regards,
>
> Rainer

This patch resolves jk-manager activation issues for me. Please merge.

The patch resolved this for me as well. Does no one else use mod_jk to balance between multiple Tomcat backends?

Fixed in version 1.2.39.
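For anyone wanting to verify the fix proposed in comment #1 against the 1.2.36/1.2.37 behaviour, a minimal build-and-patch sketch follows. The tarball URL, the SVN repository path, the `patch` strip level, and the apxs location are assumptions, not part of this report; adjust them to your environment.

```sh
# Fetch the 1.2.37 connectors source (URL is an assumption; use whatever
# mirror or archive location you normally build from).
wget https://archive.apache.org/dist/tomcat/tomcat-connectors/jk/tomcat-connectors-1.2.37-src.tar.gz
tar xzf tomcat-connectors-1.2.37-src.tar.gz
cd tomcat-connectors-1.2.37-src

# Export revision 1354021 as a diff (repository path is an assumption)
# and apply it; the -p level depends on the paths inside the diff.
svn diff -c 1354021 https://svn.apache.org/repos/asf/tomcat/jk/trunk > r1354021.diff
patch -p0 < r1354021.diff

# Rebuild and install mod_jk against the local HTTPD (apxs path assumed).
cd native
./configure --with-apxs=/usr/bin/apxs
make && sudo make install
```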
Created attachment 29263: Configurations and log files

I have one HTTPD server load-balancing two Tomcats. I try to disable all Tomcat nodes at once with something like:

curl -G -d "cmd=update&mime=prop&w=router&att=vwa&val0=1&val1=1" localhost/jkmanager/

and expect HTTP code 503 to be returned for subsequent requests. In web browsers (Chrome, Firefox) the behaviour is strange: during refreshes I sometimes still see a response served by Tomcat. The same strange (random) behaviour also occurs when running wget multiple times:

wget --no-cache -O - 127.0.0.1/testapp

After that I want to start all nodes again, but it does not work. Nothing happens and the behaviour remains the same as described above:

curl -G -d "cmd=update&mime=prop&w=router&att=vwa&val0=0&val1=0" localhost/jkmanager/

By the way, the shared memory file (JkShmFile) is on a local disk (no NFS). See my configurations and logs in the attached files.
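To make the symptom easier to observe than refreshing a browser, a small check loop along these lines could be used. The jkmanager URLs, the worker name `router`, and the `vwa` values are taken from the commands above; the request count and the `/testapp` path are only illustrative.

```sh
#!/bin/sh
# Stop all members of the "router" load balancer via the JK status manager
# (vwa is the activation attribute; the values mirror the report's commands).
curl -G -d "cmd=update&mime=prop&w=router&att=vwa&val0=1&val1=1" localhost/jkmanager/

# With every member taken out of rotation, each request to the balanced
# application should return 503. Print the status code of 20 requests.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/testapp
done

# Re-activate both members (vwa=0) and repeat the loop; responses should
# switch back to 200 once the workers are active again.
curl -G -d "cmd=update&mime=prop&w=router&att=vwa&val0=0&val1=0" localhost/jkmanager/
```

On 1.2.36 the report describes intermittent 200 responses after the first command and no effect from the second; after the fix referenced in the comments, the loop should show consistent 503s while stopped and consistent 200s after re-activation.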