In a reverse proxy scenario, if the client closes the connection after sending the request headers, the backend connection might still idle until the proxy timeout expires, even though it could be closed immediately.

Actual scenario / how to reproduce:

* Use a simple netcat reverse proxy backend (balancer)
* Set Timeout/ProxyTimeout to X
* The client sends some arbitrary GET request and closes the connection
* The backend either:
  * does nothing, resulting after X seconds in:
    (70007)The timeout specified has expired: [client 192.168.150.2:48056] AH01102: error reading status line from remote server
  * sends "200 OK" with some Content-Length (with no body, or less body than announced), resulting after X seconds in:
    (70007)The timeout specified has expired: [client 192.168.150.2:48094] AH01110: error reading response

Why this might be bad in my scenario:

For some backends (such as, in my case, Microsoft-Server-ActiveSync), a high timeout might be desirable or even required. For me, these are long-polling requests whose responses have a huge Content-Length (which will never be reached), just in case some data has to be "pushed". Now, as described above, for each client connection that gets closed, a backend connection may sit idle until its (huge) timeout is reached. IMHO these connections should be closed, since reading from them is pointless, and they pose a kind of DoS risk.

I would expect the hard connection-close to be "propagated" after a close event is seen on the client connection. Or am I mistaken and this is desired behaviour (maybe for connection pooling purposes)? If so, could it be overridden by means of configuration or a patch?
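For reference, a minimal configuration sketch that should reproduce this; the timeout value of 300 s stands in for the "X" above, and port 9000 is an arbitrary choice for the netcat backend (both are assumptions for illustration, not taken from the original report):

```apache
# Sketch of a repro config (values are placeholders).
# Run `nc -l -p 9000` separately to play the silent backend.
Timeout      300
ProxyTimeout 300
ProxyPass        "/" "http://127.0.0.1:9000/"
ProxyPassReverse "/" "http://127.0.0.1:9000/"
```

With this in place, sending a GET request through the proxy and immediately closing the client connection should leave the connection to port 9000 open until the 300 s ProxyTimeout fires, at which point the AH01102 error above appears in the log.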
This looks like a duplicate of bug 54526