Bug 39243 - Can't post files larger than 128k onto ssl client cert secured site
Summary: Can't post files larger than 128k onto ssl client cert secured site
Status: RESOLVED FIXED
Alias: None
Product: Apache httpd-2
Classification: Unclassified
Component: mod_ssl
Version: 2.0.55
Hardware: PC Linux
Importance: P2 normal with 9 votes
Target Milestone: ---
Assignee: Apache HTTPD Bugs Mailing List
URL:
Keywords:
Depends on:
Blocks: 46508
Reported: 2006-04-07 17:31 UTC by Markus Bertheau
Modified: 2014-02-17 13:59 UTC (History)
7 users



Attachments
httpd-2.2.8-ssl-io-buffer.patch (3.74 KB, patch)
2008-02-05 13:08 UTC, Krzysiek Pawlik

Description Markus Bertheau 2006-04-07 17:31:57 UTC
I can't post files larger than 128k to an SSL site (120k works, 140k doesn't).
The error messages I get in ssl_error_log are

request body exceeds maximum size for SSL buffer
could not buffer message body to allow SSL renegotiation to proceed

and the client gets a 413. This doesn't occur every time; apparently only when
SSL renegotiation is needed.
Comment 1 Ruediger Pluem 2006-04-07 23:12:59 UTC

*** This bug has been marked as a duplicate of 12355 ***
Comment 2 Markus Bertheau 2006-04-08 10:28:58 UTC
The fix to bug 12355 specifically led to this bug. This is not a duplicate.
Comment 3 Ruediger Pluem 2006-04-08 11:00:55 UTC
Ok, technically you are right and your report is not exactly a duplicate of
12355, but there are currently no plans to change this behaviour in the case
of POST requests + SSL + Directory- or Location-based client certificates,
which require an SSL renegotiation. Supporting that would require introducing
disk buffering of the POST request body. If you are using client certificates
for the whole virtual host, everything works fine. So I am marking this WONTFIX.
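For illustration, the affected pattern is per-directory (rather than
per-vhost) client certificate verification, roughly like this (a minimal
sketch; certificate paths are placeholders):

    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile    /etc/ssl/server.crt
        SSLCertificateKeyFile /etc/ssl/server.key

        # The initial handshake happens without a client certificate, so
        # mod_ssl must renegotiate (and buffer any request body in RAM)
        # when a request first hits this Location.
        <Location /secure>
            SSLVerifyClient require
        </Location>
    </VirtualHost>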
Comment 4 Peter Wagemans 2006-06-28 15:15:25 UTC
There may be good functional reasons for POSTs larger than 128k combined
with requiring client certificates only for access to certain URLs. And
asking for an optional client certificate at SSL connect time bothers users
of the other URLs with unnecessary prompts for client certificates that
they may not have and don't need (depending on the browser that they
use).

For us the hard-coded limit is still a problem.

> Supporting that would require introducing disk buffering of the POST
> request body.

This is not clear to me. Where is the hard-coded limit of 128k coming
from?

Could the code not look at the value of the directive LimitRequestBody
and if it is set allow SSL request bodies of that size?
Comment 5 Doncho N. Gunchev 2006-07-07 09:25:13 UTC
I really need at least 200K limit. If I understand correctly, I can 'patch' 
the code and increase this buffer from 128K to say 256K, recompile apache and 
it will work, right?
Comment 6 Ruediger Pluem 2006-07-07 15:50:49 UTC
(In reply to comment #5)
> I really need at least 200K limit. If I understand correctly, I can 'patch' 
> the code and increase this buffer from 128K to say 256K, recompile apache and 
> it will work, right?

Correct. You can do this.
Comment 7 Joe Orton 2006-07-11 14:42:21 UTC
Well, the default could be bumped to 256K, that wouldn't be unreasonable.

But you should really design your site to ensure that the first request to a
client-cert-protected area is not a POST request with a large body; make it a
GET or something.  Any request body has to be buffered into RAM to handle this
case, so represents an opportunity to DoS the server.

To bump the limit you can build like:
 
   ./configure CPPFLAGS=-DSSL_MAX_IO_BUFFER=256000

Anybody for whom 128K is too small but 256K would be sufficient, please add a
comment here, to gauge interest in making that change.
Comment 8 powell hazzard 2006-07-13 17:51:16 UTC
While I do believe the previous unlimited approach could be a DoS vector
(nice catch), I have to believe the "one size limit fits all" approach will
not work for all the existing applications in the world. However, shouldn't
we add an SSLMaxIOBuffer directive instead of hardcoding the value at build
time? That way any pre-built server or existing application would have a way
to raise or lower this value as needed for any given virtual host or directory.
Comment 9 Peter Wagemans 2006-07-14 07:33:21 UTC
> But you should really design your site to ensure that the first
> request to a client-cert-protected area is not a POST request with a
> large body; make it a GET or something.

Not really an option with SOAP.

> I have to believe the "one size limit fits all" approach will not
> work for all the existing applications in the world.

Agreed.

> However, shouldn't we add an SSLMaxIOBuffer directive instead of
> hardcoding the value at build time?

That is a good way to remove the hard-coded limit. But is there a
reason why one could not use the existing directive LimitRequestBody,
as suggested above? It can be set for the client-cert-protected area
and then defines the size of requests that should be handled, and thus
the amount that should be allowed to be buffered.
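Concretely, the idea would be something like this (a sketch; the path and
size are made up):

    <Location /protected>
        SSLVerifyClient  require
        # Cap request bodies at 10MB; under this proposal mod_ssl would
        # then be allowed to buffer up to the same size when renegotiating.
        LimitRequestBody 10485760
    </Location>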
Comment 10 Joe Orton 2006-07-14 09:22:44 UTC
I'm fairly reluctant to add a config directive for this.  I would be happy with
a "one size fits most" hard-coded limit if we could arrive at such a value; what
is your input on changing the limit to 256K?  Would that be sufficient or not?

Overloading LimitRequestBody for such a purpose is not acceptable, no - the
default is unlimited.
Comment 11 powell hazzard 2006-07-14 14:00:00 UTC
> I'm fairly reluctant to add a config directive for this.  

   I can understand your point of view. 

> I would be happy with a "one size fits most" hard-coded limit if we could
> arrive at such a value; what is your input on changing the limit to 256K?

IMHO

   Since I work in support/engineering I can honestly say we have customers
that are using SOAP messages anywhere from 1k to 40mb in size (or higher).
So, if you are asking for my input regarding any hard-coded value, I would
have to vote for the 40mb-50mb range. While those values may seem absurd for
most small web sites, large SOAP web sites will need this type of limit out
of the box. Without a large hard-coded value, customers are going to ask
vendors like Red Hat to give them a supported version of the Apache web
server with a higher hard-coded value, because their existing applications
that have been deployed all over the world just stopped working when they
installed http://www.linuxsecurity.com/content/view/120313. (I've already
had three customers.)

Comment 12 Peter Wagemans 2006-07-14 14:18:06 UTC
> what is your input on changing the limit to 256K?  Would that be
> sufficient or not?

No. We're looking at megabyte SOAP POSTs.

> Overloading LimitRequestBody for such a purpose is not acceptable,
> no - the default is unlimited.

With that overload idea, the default value of zero (unlimited) would
be translated to the hard-coded value to protect against DoS attempts.
Defining a positive size for LimitRequestBody would allow that size to
be buffered for POSTs in mod_ssl (because it seems sensible to keep
functioning up to the specified limit). I had something along these
lines in mind:

--- httpd-2.0.46/modules/ssl/ssl_engine_io.c.old ...
+++ httpd-2.0.46/modules/ssl/ssl_engine_io.c.new ...
@@ -1395,8 +1395,17 @@
     struct modssl_buffer_ctx *ctx;
     apr_bucket_brigade *tempb;
     apr_off_t total = 0; /* total length buffered */
+    apr_off_t max_ssl_buffered = 0; /* Maximum allowed memory buffering of ssl data. */
     int eos = 0; /* non-zero once EOS is seen */
     
+    max_ssl_buffered = ap_get_limit_req_body( r );
+
+    if (max_ssl_buffered == 0) { 
+      /* If undefined/unlimited, use default limit to defend against
+       * DOS attempts. */
+      max_ssl_buffered = SSL_MAX_IO_BUFFER;
+    }
+
     /* Create the context which will be passed to the input filter. */
     ctx = apr_palloc(r->pool, sizeof *ctx);
     ctx->bb = apr_brigade_create(r->pool, c->bucket_alloc);
@@ -1460,7 +1469,7 @@
                       total, eos);
 
         /* Fail if this exceeds the maximum buffer size. */
-        if (total > SSL_MAX_IO_BUFFER) {
+        if (total > max_ssl_buffered) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                           "request body exceeds maximum size for SSL buffer");
             return HTTP_REQUEST_ENTITY_TOO_LARGE;

Comment 13 Peter Wagemans 2006-07-17 09:25:44 UTC
Here's an afterthought to the above patch to allow LimitRequestBody to
control the size of the SSL buffer. When doing this, it may be a good
idea to refer to the controlling directive in the error message and
change

"request body exceeds maximum size for SSL buffer"

into, for instance,

"request body exceeds maximum size for SSL buffer; try LimitRequestBody > 0"
Comment 14 William A. Rowe Jr. 2006-07-23 20:04:56 UTC
We can allow you to configure this to be larger, at a serious cost to how
many requests you can process.

The obvious answer for an 'upload' style operation is to ensure they never
hit your upload page without going through a simpler front page which first
enforces the renegotiation.  This can be your upload form page.

Once the session is SSLVerifyClient'ed it won't renegotiate -again-, so this
problem won't occur.

No matter what -we- do, if you design your huge-post page such that it won't
cause renegotiation on large posts, your server will always have less stress.
And that's a good thing IMHO.  2GB set-asides are absurd, but pushing up a
2GB iso image isn't inconceivable.
Comment 15 Doncho N. Gunchev 2006-07-24 08:22:42 UTC
While 256K suits our needs for now (I did recompile and it worked), tomorrow
we'll have to post larger scanned documents (say 512K), and some time later
even larger ones. My Apache is just a reverse proxy with SSL client
authentication, so an option would be better, or I'll have to recompile on
every change/update...
Comment 16 Peter Wagemans 2006-07-24 11:37:21 UTC
Will Rowe wrote:

> The obvious answer for an 'upload' style operation is to ensure they
> never hit your upload page without going through a simpler front
> page which first enforces the renegotiation.  This can be your upload
> form page.

> Once the session is SSLVerifyClient'ed it won't renegotiate -again-,
> so this problem won't occur.

This can work for interactive applications, but there are common
situations without an upload page: for example, an application that wants
to submit data to the web server in a SOAP POST request.


Note: the above proposal of using an upload page request to
renegotiate for the client certificate appears to work only with
"SSLVerifyClient none" but not with "SSLVerifyClient optional" at top
level. In the latter case a renegotiation is performed on the subsequent
form POST even when a client certificate is already present, so you
again run into the 128K limit. This is probably explained by the
following code in ssl_engine_kernel.c, which only treats "none" as a
special case:

        /* optimization */

        if ((dc->nOptions & SSL_OPT_OPTRENEGOTIATE) &&
            (verify_old == SSL_VERIFY_NONE) &&
            ((peercert = SSL_get_peer_certificate(ssl)) != NULL))
        {
            renegotiate_quick = TRUE;
            X509_free(peercert);
        }
Comment 17 smitha.jasti 2006-11-19 22:51:05 UTC
Hi,

I too am facing a problem due to the fixed buffer size. I saw the suggestion
about adding a directive to modify the buffer size as needed. Has there been
any work done in this regard? Any other suggestions about how this problem
could be fixed?
Comment 18 Ronald van Kuijk 2007-01-20 07:38:34 UTC
We are currently in the process of getting this 'fixed' via Red Hat commercial
support (which we have). The fix of Peter Wagemans will probably be extended a
little and hopefully checked in.
Comment 19 Kevin 2007-03-03 09:01:16 UTC
I'm hitting this limit too, using apache 2.2.3, built from a standard
Gentoo-based ebuild.

I'm using Apache here in this context as a front-end to Zope/Plone, and plone
offers the user the option of uploading content.  This content has no inherent
plone-based size limits.  So in my case, if I use SSL to secure my sites (which
I do), and I use Apache as a front-end, as described in several places in plone
documentation, two of which are here:
http://plone.org/documentation/tutorial/plone-apache
http://plone.org/documentation/how-to/apache-ssl

...and I upload large files, then I get nailed by this limit.  Has any further
work been done with this in 2.2?

What info is still needed to resolve this bug?
Comment 20 Kevin 2007-03-03 16:50:51 UTC
Sorry.  I should have added above that there are no client certificates
involved in these uploads.  I'm not savvy enough about the internals of
either apache or plone to know, so I suppose it's possible that what I'm
seeing is not actually this bug. But the behavior of my systems matches the
symptoms in every way except for the involvement of client certificates, so
if they are not the same, they are at least very probably strongly
associated with each other.

When I upload files 128kb and smaller, it works as expected.  When I attempt to
upload files 129kb and larger, I get this:

Error message in browser:
Title: 413 Request Entity Too Large
Page: Request Entity Too Large
The requested resource
/Members/admin/portal_factory/Image/image.2007-03-03.9545920618/atct_edit
does not allow request data with POST requests, or the amount of data provided
in the request exceeds the capacity limit.

Error message in logs:
[Sat Mar 03 19:26:35 2007] [error] [client xxx.yyy.zzz.ttt] request body exceeds
maximum size for SSL buffer, referer:
https://www.example.com/Members/admin/portal_factory/Image/image.2007-03-03.9545920618/edit
[Sat Mar 03 19:26:35 2007] [error] [client xxx.yyy.zzz.ttt] could not buffer
message body to allow SSL renegotiation to proceed, referer:
https://www.example.com/Members/admin/portal_factory/Image/image.2007-03-03.9545920618/edit

I've spoken with someone on the plone list who's using RHEL and
apache/ssl/plone in the same manner that I am, and he reports not suffering
from this problem. I'm not sure if he has any upper limit at all, or if the
upper limit is simply larger than 128kb.  I'm still talking with him.

I guess Red Hat has applied some sort of patch.  Does anyone know about
that?  Is it the same one mentioned in this bug report?  I'd like the limit
(if it must exist) to be up in the 40-50 MB range myself.  If there's
another patch, perhaps someone could refer me to it?
Comment 21 Kevin 2007-03-04 06:38:23 UTC
[somewhat sheepishly]: After all the discussion, and rereading documentation and
config files and the bug report several times over, I noticed that my apache
server config file used the SSLVerifyClient Directive at level "optional" and
that the documentation states, "In practice only levels 'none' and 'require' are
really interesting, because level 'optional' doesn't work with all browsers".  I
was also using the SSLVerifyDepth Directive at a depth number of 1.

By commenting out these two directives, I solved the problem.

When I remarked earlier that client certificates were not involved at all, I
mistakenly considered only what was going on with the client, failing to
consider client certificate directives on the server. Apologies if I should have
thought of that sooner, and if I generated a lot of commotion over nothing.
Comment 22 Raman Gupta 2007-07-15 09:47:47 UTC
I am running httpd 2.2.3 on CentOS 5.

This problem also affects SugarCRM attachment uploads. The login page for
SugarCRM uses a GET request, so the renegotiation should be fine, but users
report that sometimes the attachment upload still fails with this error. Perhaps
the client-certificate SSL session times out or something, which forces httpd to
renegotiate again? If so, this is yet another use case that supports adding a
configurable per-location buffer directive.

I can find no work-around for this other than setting "SSLVerifyClient require"
at the virtual host level. However, we have good reasons to *not* set
"SSLVerifyClient require" at the virtual host level, since many of our SSL
services do not require client certs. As stated in the docs, "SSLVerifyClient
optional" doesn't work for all clients (e.g. WebDAV on win2k for one).
Comment 23 Raman Gupta 2007-07-15 10:25:31 UTC
(In reply to comment #22)
> Perhaps the client-certificate SSL session times out or something, which
> forces httpd to renegotiate again? If so, this is yet another use case that
> supports adding a configurable per-location buffer directive.

I confirmed that SSLSessionCacheTimeout affects renegotiation. Therefore, at
least for interactive applications where the upload form uses a GET request,
I believe this issue can be worked around by setting SSLSessionCacheTimeout
to a value at least as large as the application session timeout. The default
of 300 seconds on CentOS 5 was easily exceeded by a user uploading an
attachment while also filling in the associated description and other form
fields before clicking Submit.
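For example (a sketch; 3600 seconds is an arbitrary value chosen to exceed
our application session timeout):

    # Keep cached SSL sessions valid long enough that a user filling in
    # a form does not trigger a fresh renegotiation on submit.
    SSLSessionCacheTimeout 3600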

> As stated in the docs, "SSLVerifyClient optional" doesn't work for all
> clients (e.g. WebDAV on win2k for one).

Correction: I'm not sure about WebDAV on win2k working with optional or not --
the test I did earlier was incorrect. However, the point stands since I do not
want clients to be prompted for certificates anyway.
Comment 24 Krzysiek Pawlik 2008-02-05 13:08:46 UTC
Created attachment 21473 [details]
httpd-2.2.8-ssl-io-buffer.patch

This patch adds an SSLMaximumBufferSize tunable - it's global for the whole
module. It defaults to 0, which means the built-in 128k limit is used.
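Usage would look something like this (a sketch based on this patch only,
not stock httpd; the value is an example):

    # Allow buffering request bodies up to 1MB during SSL renegotiation
    # (0 = keep the built-in 128k limit)
    SSLMaximumBufferSize 1048576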
Comment 25 Nico Weichbrod 2008-03-10 05:20:20 UTC
(In reply to comment #24)
> Created an attachment (id=21473) [details]
> httpd-2.2.8-ssl-io-buffer.patch
> 
> This patch adds an SSLMaximumBufferSize tunable - it's global for the whole
> module. It defaults to 0, which means the built-in 128k limit is used.

Thanks, this works well for me. I used the patch on the Debian apache2-2.2.3
source. After one week, still no problems with the patch. Thanks for your work.
Comment 26 Peter Wagemans 2008-03-11 03:24:14 UTC
I'm wondering about the reasons for a patch with a configurable global
limit (comment #24) instead of the patch described in comments #12 and
#13 which uses the existing LimitRequestBody directive to give control
over the SSL maximum buffer size at the level of server config,
virtual host, directory or .htaccess.

Is there perhaps a reason for keeping the SSL maximum buffer size
smaller than a configured maximum request size?

Comment 27 Miles Crawford 2008-10-21 10:58:36 UTC
I hope I am not overstepping my bounds, but I'm trying to increase the severity of this issue and bring some attention back to it.

A virtualhost that mixes browser-based content and REST/SOAP services is not uncommon.

This hardcoded buffer makes the REST/SOAP activities likely to fail.

I am not sure why Joe marked this as needs info - what info is needed?
Comment 28 Joe Orton 2008-12-12 12:21:32 UTC
I've added a SSLRenegBufferSize directive in r726109 to make this buffer size configurable.  Thanks for all the feedback.
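For example (the value is in bytes; the location and size are made up):

    <Location /secure/upload>
        SSLVerifyClient    require
        # Allow request bodies up to ~10MB to be buffered in RAM while
        # the renegotiation completes.
        SSLRenegBufferSize 10486000
    </Location>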
Comment 29 Ronald van Kuijk 2008-12-13 13:03:24 UTC
Thanks.... finally an official fix. Still I'm curious about why an additional directive... see also comment #26
Comment 30 Joe Orton 2008-12-16 03:19:41 UTC
(In reply to comment #29)
> Thanks.... finally an official fix. Still I'm curious about why an additional
> directive... see also comment #26

Overloading the LimitRequestBody semantics would not be appropriate because it could make existing configurations do something completely surprising.
Comment 31 tlhackque 2009-01-10 07:26:36 UTC
Unfortunately I found this report after posting https://issues.apache.org/bugzilla/show_bug.cgi?id=46508

Joe's patch in https://issues.apache.org/bugzilla/show_bug.cgi?id=39243#c28 >almost< works.  

See https://issues.apache.org/bugzilla/show_bug.cgi?id=46508 for the issue, a fix, and a backport to 2.2.  Hopefully he'll make it official.

Hope this helps anyone else who tries to use or backport as-is.

(I didn't re-open this report since 46508 has all the detail and is open.)
Comment 32 Ruediger Pluem 2009-01-11 05:44:00 UTC
Proposed for backport as r733472.
Comment 33 Peter Wagemans 2009-01-22 05:23:27 UTC
Thanks very much for the official fix.

One remark on comment #30 from Joe Orton.

> Overloading the LimitRequestBody semantics would not be appropriate
> because it could make existing configurations do something
> completely surprising.

A separate directive is the best solution, but all the old patch in
comments #12,#13 does is allow requests up to the configured
LimitRequestBody value (if set > 0), also for requests > 128k. For me
that's not a "completely surprising" behaviour change.
Comment 34 Joe Orton 2009-01-22 07:44:30 UTC
My point was that e.g. if someone had LimitRequestBody set to 100mb for some PHP script they were using in the same context, and mod_ssl started buffering up to 100mb of request body into RAM per process -- from untrusted users -- that would allow a DoS attack against the server.  This would be surprising ;)
Comment 35 Peter Wagemans 2009-01-22 09:51:29 UTC
Yes, that's the side effect of the memory buffering. Explicitly
allowing large POSTs and doing client certificate authentication is
probably a rare combination for untrusted users.

Anyway, I'm glad it is fixed after using the old workaround patch for
2.5 years. Thanks.
Comment 36 Carmen Alvarez 2009-03-25 07:05:27 UTC
To overcome this limit in version 2.0.x, it looks like the options are either backporting the fix to 2.0, or recompiling the standard 2.0 source with "./configure CPPFLAGS=-DSSL_MAX_IO_BUFFER=256000".  The latter option seems the simplest.  If I do this, which files will it modify?  Is it just the mod_ssl.so file, or are other files impacted?
Comment 37 Pieter Ennes 2009-07-22 15:05:03 UTC
Note that I found that the workaround mentioned in comment #22, placing the SSLVerifyClient directive outside of the Directory section, inside the main vhost container, *disabled* client cert verification completely, i.e. any client could connect. Moving the statements back into the <Directory> restored security. I'll see if I can isolate it and file another bug.
Comment 38 Pieter Ennes 2009-07-23 01:32:07 UTC
A note on my last comment: I was stung by this issue:
https://issues.apache.org/bugzilla/show_bug.cgi?id=12355
So take care...
Comment 39 Ruediger Pluem 2009-11-26 00:38:04 UTC
See SSLRenegBufferSize for adjusting the buffersize (http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslrenegbuffersize)
Comment 40 Ryan 2013-03-14 15:44:03 UTC
It seems curl solves this issue by sending the HTTP 1.1 header "Expect" with value "100-continue" when you use a client certificate to perform an HTTP PUT of a huge file.  Apparently this allows the SSL renegotiate to occur before the large payload (body) is even transferred.  I noticed this due to my Java client failing with the error discussed in this thread, but curl (and libcurl) had no issues.  Unfortunately the Sun Java built-in HttpsUrlConnection implementation doesn't support the "Expect" header.  Use the Java Apache Foundation HTTP Client library or OpenJDK implementation instead.  Hope this helps someone who digs up this thread.