| Summary: | Apache Web Server 2.2.11 Incomplete HTTP Header Resource Exhaustion Vulnerability | | |
|---|---|---|---|
| Product: | Apache httpd-2 | Reporter: | sailesh_kyanam |
| Component: | Core | Assignee: | Apache HTTPD Bugs Mailing List <bugs> |
| Status: | RESOLVED INVALID | | |
| Severity: | major | CC: | a3li |
| Priority: | P2 | | |
| Version: | 2.2.11 | | |
| Target Milestone: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| URL: | http://isc.sans.org/diary.html?storyid=6601 | | |
Description
sailesh_kyanam
2009-06-24 09:30:05 UTC
This is by design; see the LimitRequest* directives for mitigation, especially http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfields. The httpd group is reviewing alternatives for timeout processing, but is already well aware of similar complaints. In the interim, see iptables and similar firewall tools and appliances to restrict abusive behavior patterns at the IP and TCP layers, and LimitRequestFields etc. to control the number of headers expected by your specific environment.

Will, with all due respect, I don't think the fact that we're aware of it (and in the wake of Slowloris everyone is discussing it) invalidates a bug report filed against current versions. In the short term, we need to publish something on mitigation. We have yet to do even that!

Nick: this particular report describes the problem that arbitrary headers, up to some arbitrary number (limit 100 by default), are accepted individually by httpd. That is not a bug. Reclosing. A bug report w.r.t. timeouts would be entirely appropriate, and I'm sure this reporter would appreciate being cc'ed on that particular case. What is described here is absolutely not a bug.

Thanks for your feedback and insights. Whether we call this a bug, a feature, or a known issue, I was able to very trivially bring down numerous Apache web servers using a modified version of this script. I could work around the issue by reducing the timeout to very low values (which are not always acceptable in our situation) and/or limiting the headers to an unreasonably small number (no idea what effect this would have on some of our more complex apps). The only realistic workaround I found was to allocate a large number of processes and assign a large number of threads to each process (I use mpm_worker), and then hope that the script kiddie attacking me is not a persistent *gentleman*. Of course, there are other options, such as firewalls and IDS, both of which are impractical in many of our use cases.
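The mitigations discussed above (LimitRequest* caps, a lower Timeout, and the reporter's worker-MPM over-provisioning workaround) can be sketched as an httpd 2.2 configuration fragment. This is a hypothetical example, not taken from the report; every value below is an illustrative assumption and should be tuned against your own traffic before deployment.

```apache
# Cap the number of request header fields accepted per request
# (httpd's default is 100; see core.html#limitrequestfields).
LimitRequestFields 50

# Cap the size of any single header field (default 8190 bytes).
LimitRequestFieldSize 4094

# Lower the connection timeout so stalled requests are reaped sooner
# (default 300 seconds in 2.2; very low values can break slow but
# legitimate clients, as the reporter notes).
Timeout 30

# The reporter's workaround: over-provision worker-MPM threads so that
# slow connections are less likely to exhaust the pool. Example sizing
# only; MaxClients must equal ServerLimit * ThreadsPerChild.
<IfModule mpm_worker_module>
    ServerLimit        32
    ThreadsPerChild    64
    MaxClients       2048
</IfModule>
```

Note that none of these settings removes the underlying exposure; they only raise the cost of holding connections open, which is why the thread complains that firewalls or IDS remain the other practical layer of defense.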