Bug 29962 - byterange filter buffers response in memory
Status: CLOSED FIXED
Product: Apache httpd-2
Classification: Unclassified
Component: Core
Version: 2.0.54
Hardware: PC Windows XP
Importance: P3 major with 3 votes
Assigned To: Apache HTTPD Bugs Mailing List
Reported: 2004-07-07 21:39 UTC by Filip Sneppe
Modified: 2005-08-26 05:31 UTC



Attachments
Byterange patch for Apache 2.0.x (2.87 KB, patch)
2005-08-18 21:29 UTC, dswhite42

Description Filip Sneppe 2004-07-07 21:39:29 UTC
This bug may be related to bug 28175, although I am on a different platform
with a slightly different setup, and I can reproduce the problem, so I am
opening a new bug report for this.

For many months, using different versions of Apache 2.0.x from
Debian testing, we have been experiencing what can only be described as
serious memory leaks when using mod_proxy and/or mod_rewrite to reverse 
proxy backend web servers. I am using MPM prefork.

I have been able to troubleshoot this problem using network captures etc. on 
our production machines and I believe I can reproduce this problem on
a test system.

To reproduce this, I used Apache 1.3.29 listening on port 3000 on localhost
as the backend server. In our live environment, this runs on a
different machine.
Apache 2 listens on port 80 and acts as a reverse proxy for the
server on port 3000.

For Apache 1.3, I have /usr/lib/cgi-bin/do-post:

#!/usr/bin/perl
print "Content-type: text/plain\n\n";
for ($t = 0; $t < 400000; ++$t) {
        print "o" x 99;
        print "\n";
}

This essentially returns about 40 MB of data (400,000 lines of 100 bytes each).

For the Apache2 config, I have the following virtual host:

<VirtualHost *>
        ServerName www.test.local
        RewriteEngine On
        ProxyRequests On
        ProxyPreserveHost Off
        RewriteCond %{HTTP_HOST}      ^www\.test\.local$
        RewriteRule ^/(.*)            http://localhost:3000/$1     [P,L]
</VirtualHost>

Now, the following type of request:

sneppef@xbox:~$ telnet localhost 80 > /tmp/t
POST /cgi-bin/do-post HTTP/1.1
Host: www.test.local
Range: bytes=20000000-35000000

suddenly bumps the memory use of an Apache child process, in this
case to about 80 MB on my system, and this memory doesn't seem to get
freed afterwards.

This is as close as I can get to simulating this in a test environment
at the moment. Things are even more dramatic on our production systems.
They have 1 GB of RAM and the following virtual host setting:

<VirtualHost *>
        ServerName www.xxxxxx.be
        RewriteEngine On
        ProxyRequests On
        ProxyPreserveHost Off
        CustomLog /var/log/apache2/www.caffo.be.log "full"
        RewriteCond %{HTTP_HOST}      ^www\.xxxxxx\.be$
        RewriteRule ^/(.*)            http://a.b.c.88/$1     [P,L]
</VirtualHost>

(I mangled the hostname and IP address in the above!)

combined with the following request (this is straight from a network
capture):

POST /xxxxxx/demodownload/download/Converter6_Trial_Setup.exe HTTP/1.0
Via: 1.1 RGOPROXY
Content-Length: 45
Content-Type: application/x-www-form-urlencoded
User-Agent: Mozilla/4.0 (compatible; MSIE 5.00; Windows 98)
Host: www.xxxxxx.be
Range: bytes=19092309-38183432
Accept: */*
Referer: http://www.xxxxxx.be/xxxxxx/demodownload/demodownload.downloadpage
Cache-Control: max-stale=0
Connection: Keep-Alive
X-BlueCoat-Via: 175D569BDF449936

custid=2813&p_type=CONVDEMO&download=Download

(Again, the xxxxxx are mangled)

bumps the memory use of an Apache 2 process up to about 150 MB. Obviously,
a limited number of these requests renders our reverse proxy
unusable.

The file in question is 38 MB. The backend server returns this
(from tcpdump -X):

0x0030   0782 c70e 4854 5450 2f31 2e31 2032 3030        ....HTTP/1.1.200
0x0040   204f 4b0d 0a44 6174 653a 2057 6564 2c20        .OK..Date:.Wed,.
0x0050   3037 204a 756c 2032 3030 3420 3231 3a33        07.Jul.2004.21:3
0x0060   313a 3530 2047 4d54 0d0a 5365 7276 6572        1:50.GMT..Server
0x0070   3a20 4f72 6163 6c65 2048 5454 5020 5365        :.Oracle.HTTP.Se
0x0080   7276 6572 2050 6f77 6572 6564 2062 7920        rver.Powered.by.
0x0090   4170 6163 6865 2f31 2e33 2e31 3220 2857        Apache/1.3.12.(W
0x00a0   696e 3332 2920 4170 6163 6865 4a53 6572        in32).ApacheJSer
0x00b0   762f 312e 3120 6d6f 645f 7373 6c2f 322e        v/1.1.mod_ssl/2.
0x00c0   362e 3420 4f70 656e 5353 4c2f 302e 392e        6.4.OpenSSL/0.9.
0x00d0   3561 206d 6f64 5f70 6572 6c2f 312e 3234        5a.mod_perl/1.24
0x00e0   0d0a 436f 6e74 656e 742d 4c65 6e67 7468        ..Content-Length
0x00f0   3a20 3338 3138 3334 3333 0d0a 436f 6e74        :.38183433..Cont
0x0100   656e 742d 5479 7065 3a20 6170 706c 6963        ent-Type:.applic
0x0110   6174 696f 6e2f 6f63 7465 742d 7374 7265        ation/octet-stre
0x0120   616d 0d0a 0d0a 4d5a 9000 0300 0000 0400        am....MZ........

followed by the complete file (38 MB).

Apache 2 returns only the requested byte range:

HTTP/1.1 200 OK
Date: Wed, 07 Jul 2004 21:28:14 GMT
Server: Oracle HTTP Server Powered by Apache/1.3.12 (Win32) ApacheJServ/1.1
mod_ssl/2.6.4 OpenSSL/0.9.5a mod_pe
rl/1.24
Content-Length: 19091124
Content-Type: application/octet-stream
Content-Range: bytes 19092309-38183432/38183433
Connection: close

...data...

Surely, if the backend server only returns about 38 MB of data, the
Apache 2 child process shouldn't consume 150 MB :-/
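The figures in the capture are at least internally consistent; a quick check (not from the original report) confirms the proxied Content-Length matches the requested range:

```python
# Figures quoted above: Range: bytes=19092309-38183432 of a 38183433-byte file.
start, end, total = 19092309, 38183432, 38183433

range_length = end - start + 1   # HTTP byte ranges are inclusive on both ends
assert range_length == 19091124  # Content-Length in the proxied response above
assert end == total - 1          # the range runs to the last byte of the file
```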

I hope this helps... If you need any other info, let me know,
as I have been able to reproduce this every time. I have tagged
this as a major issue, since bug 23567 seems to be considered
critical. I hope I am not exaggerating, as this is my first
bug report on Apache. Do keep up the excellent work!

Regards
Comment 1 Joe Orton 2004-07-08 12:52:19 UTC
Yes, this is a problem with the byterange filter in 2.0: it buffers the
entire response in memory.
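This is not Apache's actual filter code, but a minimal sketch (function names and chunk handling invented for illustration) of the difference between the 2.0 behaviour, which collects the whole body before slicing, and a streaming filter that forwards only the bytes inside the requested range:

```python
def buffering_filter(chunks, start, end):
    # 2.0-style behaviour (simplified): hold the entire response body
    # in memory, then slice out the requested range.
    body = b"".join(chunks)          # memory use grows with the full response
    return body[start:end + 1]       # byte ranges are inclusive

def streaming_filter(chunks, start, end):
    # Streaming behaviour (simplified): pass through only the bytes that
    # fall inside the range; memory use stays bounded by one chunk.
    pos = 0
    for chunk in chunks:
        lo = max(start, pos)
        hi = min(end + 1, pos + len(chunk))
        if lo < hi:
            yield chunk[lo - pos:hi - pos]
        pos += len(chunk)

# Both produce the same bytes for a 1000-byte body and range 250-749;
# only their peak memory use differs.
chunks = [bytes([65 + i]) * 100 for i in range(10)]
assert buffering_filter(chunks, 250, 749) == b"".join(streaming_filter(chunks, 250, 749))
```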
Comment 2 Filip Sneppe 2004-07-26 14:33:49 UTC
I am just wondering if there is currently *any* workaround for this?
A directive that disables byterange support?
Because isn't this a serious security issue in itself? It means
any user who can send HTTP requests to an Apache proxy can DoS it
by sending even a limited number of specially crafted requests that
download some large files somewhere...
Comment 3 André Malo 2004-07-26 15:40:28 UTC
You can DoS any HTTP server very easily. One could say that's part of the
protocol ;-)

Anyway,

RequestHeader unset Range

or something like that should work for you.
Comment 4 Nick Kew 2004-07-26 16:59:43 UTC
Don't forget

Header unset Accept-Ranges

so the server isn't telling porkies about its capabilities.
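Putting the two suggestions together, a workaround virtual host might look like this (a sketch only, based on the reporter's test config; assumes mod_headers is loaded, and note it disables range support entirely for this vhost):

```apache
<VirtualHost *>
        ServerName www.test.local
        RewriteEngine On
        ProxyRequests On
        ProxyPreserveHost Off
        # Workaround from comment 3: drop the Range request header so the
        # byterange filter never has to slice the response
        RequestHeader unset Range
        # Workaround from comment 4: stop advertising range support
        Header unset Accept-Ranges
        RewriteCond %{HTTP_HOST}      ^www\.test\.local$
        RewriteRule ^/(.*)            http://localhost:3000/$1     [P,L]
</VirtualHost>
```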
Comment 5 Tyler 2004-09-28 19:52:41 UTC
I appear to be experiencing a similar problem when my users submit files to the
server.  I've noticed that a submitted assignment (POST) with an attached
Word document appears to cause one of the Apache processes to inflate to
between 300 and 750 MB in size.  This does not appear to be equal to the
size of the attachment.
Comment 6 Tyler 2004-10-05 19:37:53 UTC
I hunted down the problem: it was an error in a purchased PHP script that loaded
the entire contents of one of our db tables into memory (which is now
approaching the 800 MB threshold).  However, when the script terminated,
that memory was not released by Apache 2.0.51.
Comment 7 Joe Orton 2005-06-14 22:41:04 UTC
The byterange filter memory consumption issue is now fixed for 2.1.5.

http://svn.apache.org/viewcvs?rev=188797&view=rev
Comment 8 dswhite42 2005-08-18 21:29:44 UTC
Created attachment 16102 [details]
Byterange patch for Apache 2.0.x

I hope Joe won't mind if I post a version of the patch which he modified to
work with the Apache 2.0.x branch.  Thanks, Joe!
Comment 9 Joe Orton 2005-08-23 10:36:47 UTC
Now merged for 2.0.55.  Thanks for the report.
Comment 10 Brady Bowen 2005-08-26 13:31:12 UTC
I'm not sure that this has been fixed. I've downloaded the patch and
applied it, yet it still runs amok.  I checked on my server this morning, and
there sat an Apache process holding onto 635 MB of data.  Maybe this bug is
occurring somewhere else.  I am in no way capable of tracking that down, but I do
know it's still occurring.