Bug 61090 - mod_proxy gives 502 on early HTTP response (3xx, 4xx, 5xx)
Summary: mod_proxy gives 502 on early HTTP response (3xx, 4xx, 5xx)
Status: NEW
Alias: None
Product: Apache httpd-2
Classification: Unclassified
Component: mod_proxy_http
Version: 2.4.25
Hardware: PC Mac OS X 10.1
Importance: P2 normal
Target Milestone: ---
Assignee: Apache HTTPD Bugs Mailing List
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2017-05-12 21:30 UTC by felipe
Modified: 2020-06-16 08:58 UTC
CC List: 1 user
Description felipe 2017-05-12 21:30:48 UTC
I’ve got an upstream application that sits behind mod_proxy. When a client POSTs a file upload via a form, the request gets proxied to my application: mod_proxy appears to spool part of the file upload, then starts sending.

The problem is that my application occasionally sends back a 307 or 308. Once my application receives the headers and determines that it needs to redirect, it sends the 307 response and then close()s its side of the connection.

From what I can see in the strace output between Apache and my application, though, Apache never poll()s to see whether the upstream endpoint has sent a response. It just keeps sending the file upload, which eventually fails with ECONNRESET; in response I get errors in my Apache log, and the client gets a nasty 502 error.

It seems like Apache should poll() for a read event on the upstream connection to detect whether it should stop proxying the request body and instead forward the response.
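A minimal sketch of that idea with plain Python sockets (not mod_proxy code; the function name, chunk size and timeout are illustrative only):

```python
import select

def send_body_watching_for_reply(sock, body, chunk=8192, timeout=5.0):
    """Send `body` in chunks, but before each write poll the socket for
    readability: a read event mid-upload means the peer has already
    responded (or closed), so stop uploading and return its response."""
    sent = 0
    while sent < len(body):
        readable, writable, _ = select.select([sock], [sock], [], timeout)
        if readable:
            # Early response (or b"" if the peer closed): stop sending.
            return sent, sock.recv(65536)
        if writable:
            sent += sock.send(body[sent:sent + chunk])
    return sent, None
```

Polling both directions at once costs little; the extra read event simply short-circuits the upload when the upstream answers early.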
Comment 1 felipe 2017-05-15 16:34:17 UTC
This actually happens on any early HTTP response, not just on 307s.
Comment 2 felipe 2017-05-15 16:48:31 UTC
Note that RFC 2616/8.2.4 specifies that clients SHOULD implement behavior such as this task describes:

-------------
 If at any point an error status is received, the client

      - SHOULD NOT continue and

      - SHOULD close the connection if it has not completed sending the
        request message.
Comment 3 Eric Covener 2017-05-15 17:10:21 UTC
(In reply to felipe from comment #2)
> Note that RFC 2616/8.2.4 specifies that clients SHOULD implement behavior
> such as this task describes:
> 
> -------------
>  If at any point an error status is received, the client
> 
>       - SHOULD NOT continue and
> 
>       - SHOULD close the connection if it has not completed sending the
>         request message.

I think the PR is a useful requirement, but AFAICT this guidance above was not carried forward to httpbis (and even in 2616 there is no requirement to poll for a status code mid-request, so we cannot say it has been received).

See 7230 6.6 for expectations on how connections are safely torn down.
Comment 4 felipe 2017-05-15 18:41:41 UTC
Whether the upstream application does a “staged” shutdown as 7230/6.6 envisions or does an abrupt close(), the change in mod_proxy would be the same, wouldn’t it?

Maybe mod_proxy wouldn’t necessarily have to poll() for a read event unless ECONNRESET is received? I’m not sure of the performance implications of poll()ing for 2 events regularly rather than just one, but I’d think that would be the simplest way to handle the situation.
Comment 5 Eric Covener 2017-05-15 18:51:10 UTC
(In reply to felipe from comment #4)
> Whether the upstream application does a “staged” shutdown as 7230/6.6
> envisions or does an abrupt close(), the change in mod_proxy would be the
> same, wouldn’t it?

I don't think so, my assumption is that if the server had read the body, there would be no RST and the response would be read and returned as if it was a function of the body (as in a normal flow)

> 
> Maybe mod_proxy wouldn’t necessarily have to poll() for a read event unless
> ECONNRESET is received? I’m not sure of the performance implications of
> poll()ing for 2 events regularly rather than just one, but I’d think that
> would be the simplest way to handle the situation.

Seems like there is/was a risk in waiting for the write failure: the unread response might no longer be given back to mod_proxy if the stack has seen the RST.
Comment 6 felipe 2017-05-15 21:12:54 UTC
(In reply to Eric Covener from comment #5)
> (In reply to felipe from comment #4)
> > Whether the upstream application does a “staged” shutdown as 7230/6.6
> > envisions or does an abrupt close(), the change in mod_proxy would be the
> > same, wouldn’t it?
> 
> I don't think so, my assumption is that if the server had read the body,
> there would be no RST and the response would be read and returned as if it
> was a function of the body (as in a normal flow)

Having the server read the (entire) POST body is wasteful, though: the client is “waiting for nothing”.

> 
> > 
> > Maybe mod_proxy wouldn’t necessarily have to poll() for a read event unless
> > ECONNRESET is received? I’m not sure of the performance implications of
> > poll()ing for 2 events regularly rather than just one, but I’d think that
> > would be the simplest way to handle the situation.
> 
> Seems like there is/was a risk in waiting for the write failure: the unread
> response might no longer be given back to mod_proxy if the stack has seen
> the RST.

It’s definitely a less clean approach, true.
Comment 7 Michael Osipov 2020-05-20 19:29:56 UTC
Here is another in-detail description of the issue: https://www.mail-archive.com/users@tomcat.apache.org/msg135207.html
Comment 8 Yann Ylavic 2020-05-20 20:19:48 UTC
Can't the ProxyPass ping= parameter help here, such that mod_proxy_http issues a 100-continue negotiation with the backend (even if the client did not ask for it)?

In latest mod_proxy versions, if the response to the 100-continue negotiation is not a "100 continue" status from the backend, then the response is forwarded directly to the client (the request body is never sent to the backend).
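For reference, a worker configuration along these lines enables that negotiation (host, path and timeout values below are placeholders, not a recommendation):

```apache
# Ask the backend for 100-continue before forwarding the request body.
ProxyPass "/app/" "http://backend.example.com:8080/app/" ping=60 timeout=60
ProxyTimeout 60
```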
Comment 9 Michael Osipov 2020-05-20 21:31:54 UTC
(In reply to Yann Ylavic from comment #8)
> Can't the ProxyPass ping= parameter help here, such that mod_proxy_http
> issues a 100-continue negotiation with the backend (even if the client did
> not ask for it)?
> 
> In latest mod_proxy versions, if the response to the 100-continue
> negotiation is not a "100 continue" status from the backend, then the
> response is forwarded directly to the client (the request body is never sent
> to the backend).

In my case, to some extent. I have tried with 100 ms and 500 ms. The backend is on the same machine. Sometimes it works, sometimes curl receives a 503. But adding a fixed overhead to each and every request isn't a real solution because the backend (Tomcat) does the right thing. mod_proxy_http should probably react to the immediate close.
Comment 10 Yann Ylavic 2020-05-20 21:52:57 UTC
(In reply to Michael Osipov from comment #9)
> In my case, to some extent. I have tried with 100 ms and 500 ms. The backend
> is on the same machine. Sometimes it works, sometimes curl receives a 503.

That's way too short a timeout IMHO; you should use the same value as ProxyTimeout (or the timeout= parameter, if defined). A short timeout is only useful if you want to fail over to another balancer member, but where there is a single worker it's no different from the usual proxy<=>backend timeout.

> But
> adding a fixed overhead to each and every request isn't a real solution
> because the backend (Tomcat) does the right thing. mod_proxy_http should
> probably react to the immediate close.

Which overhead? There is no round-trip here unless the client as requests a 100-continue, and in case of final response issued directly by the backend there is no overhead at all.
Comment 11 Yann Ylavic 2020-05-20 21:54:26 UTC
(In reply to Yann Ylavic from comment #10)
> ... unless the client as requests a ...
... unless the client *also* requests a ...
Comment 12 Yann Ylavic 2020-05-20 22:08:20 UTC
Also, I think that a backend should not allow itself to respond early and close the connection (without reading the whole request) unless 100-continue is involved somehow. HTTP being transactional, a proxy can legitimately (IMHO) respond with an error if it could not send the request completely.
Comment 13 Michael Osipov 2020-05-21 16:05:14 UTC
First of all, the case here applies only to clients which do not provide "Expect: 100-continue", e.g., py-requests.

(In reply to Yann Ylavic from comment #12)
> Also, I think that a backend should not allow itself to respond early and
> close the connection (without reading the whole request) unless 100-continue
> is involved somehow. HTTP being transactional, a proxy can legitimately
> (IMHO) respond with an error if it could not send the request completely..

Why actually? Tomcat does this and decent clients are able to handle it (e.g., curl). Consider that I am uploading a huge payload and the server knows after a few hundred bytes that it will refuse to process the request anyway. Why waste time and bandwidth to reject the request after the upload?

I'd like to cite RFC 7230 which covers this case:

First:
> 6.3.1.  Retrying Requests
> 
>    Connections can be closed at any time, with or without intention.
>    Implementations ought to anticipate the need to recover from
>    asynchronous close events.
>    When an inbound connection is closed prematurely, a client MAY open a
>    new connection and automatically retransmit an aborted sequence of
>    requests if all of those requests have idempotent methods (Section
>    4.2.2 of [RFC7231]).  A proxy MUST NOT automatically retry
>    non-idempotent requests.

Second:
> 6.6.  Tear-down
> 
>    A server that sends a "close" connection option MUST initiate a close
>    of the connection (see below) after it sends the response containing
>    "close".  The server MUST NOT process any further requests received
>    on that connection.
> 
>    A client that receives a "close" connection option MUST cease sending
>    requests on that connection and close the connection after reading
>    the response message containing the "close"; if additional pipelined
>    requests had been sent on the connection, the client SHOULD NOT
>    assume that they will be processed by the server.

This is what is happening here. Tomcat closes the connection due to the 401, but HTTPd still insists on sending the request body.

So does curl:
> $ curl -X POST  --anyauth -u : --upload-file inputs.zip -H "Content-Type: application/zip" https://<hostname>:11111/content-dev/api/documents --verbose -H "Expect:"
> * Uses proxy env variable NO_PROXY == 'localhost .siemens.net .siemens.com .siemens.de'
> *   Trying <ip>:11111...
> * Connected to <hostname> (<ip>) port 11111 (#0)
> * ALPN, offering h2
> * ALPN, offering http/1.1
> * successfully set certificate verify locations:
> *   CAfile: none
>   CApath: /etc/ssl/certs/
> * TLSv1.3 (OUT), TLS handshake, Client hello (1):
> * TLSv1.3 (IN), TLS handshake, Server hello (2):
> * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
> * TLSv1.3 (IN), TLS handshake, Certificate (11):
> * TLSv1.3 (IN), TLS handshake, CERT verify (15):
> * TLSv1.3 (IN), TLS handshake, Finished (20):
> * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
> * TLSv1.3 (OUT), TLS handshake, Finished (20):
> * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
> * ALPN, server did not agree to a protocol
> * Server certificate:
> *  subject: C=DE; O=Siemens; OU=LDA DW; CN=<hostname>
> *  start date: Mar 12 09:50:16 2020 GMT
> *  expire date: Mar 19 13:10:13 2021 GMT
> *  subjectAltName: host "<hostname>" matched cert's "<hostname>"
> *  issuer: C=DE; ST=Bayern; L=Muenchen; O=Siemens; serialNumber=ZZZZZZB7; OU=Siemens Trust Center; CN=Siemens Issuing CA Intranet Server 2017
> *  SSL certificate verify ok.
> > POST /content-dev/api/documents HTTP/1.1
> > Host: <hostname>:11111
> > User-Agent: curl/7.70.0
> > Accept: */*
> > Content-Type: application/zip
> > Content-Length: 7664149
> >
> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
> * old SSL session ID is stale, removing
> * Mark bundle as not supporting multiuse
> < HTTP/1.1 401
> < WWW-Authenticate: Negotiate
> < Content-Type: text/html;charset=utf-8
> < Content-Language: en
> < Content-Length: 437
> < Date: Wed, 20 May 2020 08:28:16 GMT
> * NEGOTIATE send, close instead of sending 7598613 bytes
> <
> * Excess found: excess = 437 url = /content-dev/api/documents (zero-length body)
> * Closing connection 0
> * Issue another request to this URL: 'https://<hostname>:11111/content-dev/api/documents'

Especially:
> * NEGOTIATE send, close instead of sending 7598613 bytes

I will try to match ping with ProxyTimeout and see how far I get.
Comment 14 Michael Osipov 2020-05-21 16:35:20 UTC
I have just tried setting ping to the value of ProxyTimeout. It does indeed work, but at a very high price:

> osipovmi@deblndw011x:~
> $ time curl -X POST  --anyauth -u : --upload-file inputs.zip -H "Content-Type: application/zip" https://<hostname>:11111/content-dev/api/documents --verbose -H "Expect:"  --trace trace.direct
> 
> real    0m6,443s
> user    0m5,180s
> sys     0m0,394s
> osipovmi@deblndw011x:~
> $ time curl -X POST  --anyauth -u : --upload-file inputs.zip -H "Content-Type: application/zip" https://<hostname>/content-dev/api/documents --verbose -H "Expect:"  --trace trace.proxy
> 
> real    0m11,485s
> user    0m9,527s
> sys     0m0,648s

When I look at the trace files it is clear that in the direct case curl sends the first block of 65536 bytes and then handles the close properly, whereas in the proxy case the entire body is sent twice.

I can provide you both trace files if you like.
Comment 15 Yann Ylavic 2020-05-21 17:06:18 UTC
(In reply to Michael Osipov from comment #14)
> 
> When I look at the trace files it is clear that in the direct case curl
> sends the first block of 65536 bytes and then handles the close properly,
> whereas in the proxy case the entire body is sent twice.

Sent twice by curl to the proxy? The proxy to the backend?
I don't see where this "twice" comes from.

> 
> I can provide you both trace files if you like.

Yes please, with the corresponding httpd log traces (LogLevel trace5) and possibly the dumpio module enabled with this configuration snippet:
    LoadModule dumpio_module modules/mod_dumpio.so
    DumpIOInput on
    DumpIOOutput on
Comment 16 Yann Ylavic 2020-05-21 18:06:07 UTC
(In reply to Michael Osipov from comment #13)
> First of all, the case here applies only to clients which do not provide
> "Expect: 100-continue", e.g., py-requests.
OK, so the ping= parameter allows 100-continue between mod_proxy and the backend only.

> 
> > Also, I think that a backend should not allow itself to respond early and
> > close the connection (without reading the whole request) unless 100-continue
> > is involved somehow. ...
> 
> Why actually? Tomcat does this and decent clients are able to handle it
> (e.g., curl). Consider that I am uploading a huge payload and the server
> knows after a few hundred bytes that it will refuse to process the request
> anyway. Why waste time and bandwidth to reject the request after the upload?
A proxy is not a client; there are cases where, whatever the proxy does, it cannot recover from this situation. See below.

> 
> I'd like to cite RFC 7230 which covers this case:
Not really..

> 
> First:
> > 6.3.1.  Retrying Requests
> > 
> >    Connections can be closed at any time, with or without intention.
> >    Implementations ought to anticipate the need to recover from
> >    asynchronous close events.
> >    When an inbound connection is closed prematurely, a client MAY open a
> >    new connection and automatically retransmit an aborted sequence of
> >    requests if all of those requests have idempotent methods (Section
> >    4.2.2 of [RFC7231]).  A proxy MUST NOT automatically retry
> >    non-idempotent requests.
So it's clearly a different situation for a CLIENT and a proxy above, right?
But we are not talking about retrying requests here anyway.

> 
> Second:
> > 6.6.  Tear-down
> > 
> >    A server that sends a "close" connection option MUST initiate a close
> >    of the connection (see below) after it sends the response containing
> >    "close".  The server MUST NOT process any further requests received
> >    on that connection.
The above says nothing about when the SERVER should close the TCP connection after sending a "Connection: close" response, and particularly nothing about _early_ "close" (read "Connection: close" when "close" is between double quotes).

> > 
> >    A client that receives a "close" connection option MUST cease sending
> >    requests on that connection and close the connection after reading
> >    the response message containing the "close"; if additional pipelined
> >    requests had been sent on the connection, the client SHOULD NOT
> >    assume that they will be processed by the server.
That's not the same situation really, all this talks about is what a CLIENT/SERVER should do after a "Connection: close" request/response has completed.

> 
> This is what is happening here. Tomcat closes the connection due to the 401,
> but HTTPd still insists on sending the request body.

What happens if Tomcat closes the connection as soon as it sends an early response is that there is no way for the proxy to reliably read the early response. Anything sent to Tomcat can reset the connection at the TCP level, at which point nothing can be read, including the early response.

Suppose the proxy reads the client header and the first 16K of the body and forwards that to Tomcat; nothing guarantees that Tomcat will get that header+16K in one piece. Suppose that Tomcat reads the header + 8K only, responds with 401 and closes the connection. The remaining 8K (in flight) will be reset by its TCP stack and now the connection is over for everyone.

What can the proxy do to make this reliable?
Send the header only, try to read the response with some timeout, forward the response if any, otherwise send the body.
Sounds like a 100-continue, right? Less efficient though, because of the arbitrary timeout (avoided by the 100-continue ack).
Comment 17 Yann Ylavic 2020-05-21 20:05:37 UTC
I think these are the relevant bits from https://tools.ietf.org/html/rfc7230#section-6.6:

   If a server performs an immediate close of a TCP connection, there is
   a significant risk that the client will not be able to read the last
   HTTP response.  If the server receives additional data from the
   client on a fully closed connection, such as another request that was
   sent by the client before receiving the server's response, the
   server's TCP stack will send a reset packet to the client;
   unfortunately, the reset packet might erase the client's
   unacknowledged input buffers before they can be read and interpreted
   by the client's HTTP parser.

   To avoid the TCP reset problem, servers typically close a connection
   in stages.  First, the server performs a half-close by closing only
   the write side of the read/write connection.  The server then
   continues to read from the connection until it receives a
   corresponding close by the client, or until the server is reasonably
   certain that its own TCP stack has received the client's
   acknowledgement of the packet(s) containing the server's last
   response.  Finally, the server fully closes the connection.
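That staged teardown corresponds to the familiar socket-level pattern (a generic Python sketch, not httpd's or Tomcat's actual code; the drain timeout is arbitrary):

```python
import socket

def staged_close(sock, last_response, drain_timeout=2.0):
    """Close in stages per RFC 7230 section 6.6: send the final
    response, half-close the write side, drain whatever the peer is
    still sending (so our TCP stack never answers with an RST), and
    only then fully close."""
    sock.sendall(last_response)
    sock.shutdown(socket.SHUT_WR)   # half-close: we will write no more
    sock.settimeout(drain_timeout)
    try:
        while sock.recv(65536):     # read until the peer closes (b"")
            pass
    except OSError:
        pass                        # give up draining after the timeout
    sock.close()
```

With this order of operations the client can still read the last response before seeing EOF, instead of having its receive buffers erased by a reset.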
Comment 18 Michael Osipov 2020-05-21 20:51:39 UTC
(In reply to Yann Ylavic from comment #15)
> (In reply to Michael Osipov from comment #14)
> > 
> > When I look at the trace files it is clear that in the direct case curl
> > sends the first block of 65536 bytes and then handles the close properly,
> > whereas in the proxy case the entire body is sent twice.
> 
> Sent twice by curl to the proxy? The proxy to the backend?
> I don't see where this "twice" comes from.

Twice to the proxy, but I did not verify from proxy to backend.

> > I can provide you both trace files if you like.
> 
> Yes please, with the corresponding httpd log traces (LogLevel trace5) and
> possibly the dumpio module enabled with this configuration snippet:
>     LoadModule dumpio_module modules/mod_dumpio.so
>     DumpIOInput on
>     DumpIOOutput on

Sure, no problem, I will try to produce them tomorrow. Do you also want the trace output from Tomcat?
Comment 19 Michael Osipov 2020-05-21 21:21:33 UTC
(In reply to Yann Ylavic from comment #16)
> (In reply to Michael Osipov from comment #13)
> > I'd like to cite RFC 7230 which covers this case:
> Not really..
> 
> > 
> > First:
> > > 6.3.1.  Retrying Requests
> > > 
> > >    Connections can be closed at any time, with or without intention.
> > >    Implementations ought to anticipate the need to recover from
> > >    asynchronous close events.
> > >    When an inbound connection is closed prematurely, a client MAY open a
> > >    new connection and automatically retransmit an aborted sequence of
> > >    requests if all of those requests have idempotent methods (Section
> > >    4.2.2 of [RFC7231]).  A proxy MUST NOT automatically retry
> > >    non-idempotent requests.
> So it's clearly a different situation for a CLIENT and a proxy above, right?
> But we are not talking about retrying requests here anyway.

Agreed, I would not expect a proxy to retry the request, but pass on the response from the backend server, at best.

> > Second:
> > > 6.6.  Tear-down
> > > 
> > >    A server that sends a "close" connection option MUST initiate a close
> > >    of the connection (see below) after it sends the response containing
> > >    "close".  The server MUST NOT process any further requests received
> > >    on that connection.
> The above says nothing about when the SERVER should close the TCP connection
> after sending a "Connection: close" response, and particularly nothing about
> _early_ "close" (read "Connection: close" when "close" is between double
> quotes).
> 
> > > 
> > >    A client that receives a "close" connection option MUST cease sending
> > >    requests on that connection and close the connection after reading
> > >    the response message containing the "close"; if additional pipelined
> > >    requests had been sent on the connection, the client SHOULD NOT
> > >    assume that they will be processed by the server.
> That's not the same situation really, all this talks about is what a
> CLIENT/SERVER should do after a "Connection: close" request/response has
> completed.

I see the point. Client and proxy are two distinct entities/participants in RFC 7230. But from the client's perspective, it shouldn't make a difference whether it talks to a proxy or to the backend server directly, should it?

> > This is what is happening here. Tomcat closes the connection due to the 401,
> > but HTTPd still insists on sending the request body.
> 
> What happens if Tomcat closes the connection as soon as it sends an early
> response is that there is no way for the proxy to reliably read the early
> response. Anything sent to Tomcat can reset the connection at the TCP level,
> at which point nothing can be read, including the early response.
> 
> Suppose the proxy reads the client header and the first 16K of the body and
> forwards that to Tomcat; nothing guarantees that Tomcat will get that
> header+16K in one piece. Suppose that Tomcat reads the header + 8K only,
> responds with 401 and closes the connection. The remaining 8K (in flight)
> will be reset by its TCP stack and now the connection is over for everyone.
> 
> What can the proxy do to make this reliable?
> Send the header only, try to read the response with some timeout, forward
> the response if any, otherwise send the body.
> Sounds like a 100-continue, right? Less efficient though, because of the
> arbitrary timeout (avoided by the 100-continue ack).

That's a tough question, and hard to answer; there may even be several answers. I cannot tell. I could live with the ping approach if the proxy cannot salvage the situation, but the double upload is really not ideal.
I still wonder how curl does that.
Comment 20 Michael Osipov 2020-05-22 16:18:52 UTC
I have now been able to recreate the failure from mod_proxy_http with Apache HttpClient 5.0. I will get back to you within a few days and will investigate with both the Tomcat and HttpClient devs.
Comment 21 Michael Osipov 2020-06-16 08:58:27 UTC
Yann, I think I know how this can be solved (logically). I have talked to fellow committers on curl, Tomcat and HttpComponents.

* curl: It uses non-blocking I/O to detect early responses and fills a struct with that information. See https://curl.haxx.se/mail/archive-2020-05/0054.html
* Tomcat behaves correctly, Mark has also added another improvement to send out "Connection: close" on an early response because the connection is closed. See https://github.com/apache/tomcat/commit/3c304cc0da9dd413ac4df289299e9ff04c670b17.
* HttpComponents: We went back and forth with Oleg on the blocking client, since I assume that mod_proxy_http uses blocking I/O too, and we have come up with a decent solution. Discussion: https://www.mail-archive.com/httpclient-users@hc.apache.org/msg09911.html, patch: https://github.com/apache/httpcomponents-core/commit/6637591213221b21a5a473e7092fa2e65d69715c

What we basically do: after the headers are sent, and after every 8 KiB chunk of request body data, we probe whether data is available from the server. If yes, we mark the request as aborted/inconsistent, stop sending, and close the connection. The discussion also contains extensive logging output.
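For a blocking sender, that probing scheme can be sketched as follows (plain Python, not the actual HttpClient patch; function names are mine, the 8 KiB constant mirrors the description above):

```python
import select

PROBE_CHUNK = 8192  # probe after the headers and after every 8 KiB of body

def early_response_waiting(sock):
    """Zero-timeout readability check: True if the server has already
    sent something (an early response) or closed its side."""
    readable, _, _ = select.select([sock], [], [], 0)
    return bool(readable)

def send_request(sock, headers, body):
    """Send the headers, then stream the body in PROBE_CHUNK pieces,
    probing before each piece.  Returns True if the upload was aborted
    because an early response arrived (the caller then reads the
    response and closes the connection)."""
    sock.sendall(headers)
    for off in range(0, len(body), PROBE_CHUNK):
        if early_response_waiting(sock):
            return True
        sock.sendall(body[off:off + PROBE_CHUNK])
    return False
```

The zero-timeout probe adds one select() per chunk but no waiting, so a backend that reads the whole body sees no change in behavior.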

I have no idea how to implement this in mod_proxy_http. What do you think, is this feasible with the proxy module and APR?