| Summary: | large files / filesystem corruption can cause apache2 to eat up all available memory | | |
|---|---|---|---|
| Product: | Apache httpd-2 | Reporter: | Tyler "Crackerjack" MacDonald <tyler> |
| Component: | Core | Assignee: | Apache HTTPD Bugs Mailing List <bugs> |
| Status: | REOPENED | Resolution: | --- |
| Severity: | enhancement | Keywords: | PatchAvailable |
| Priority: | P2 | | |
| Version: | 2.2.2 | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Linux | | |
| Attachments: | Add a "LimitResponseFileSize" directive for default_handler to use. | | |
Description
Tyler "Crackerjack" MacDonald
2006-06-13 22:22:12 UTC
The out-of-memory condition comes from trying to split the huge file into buckets of AP_MAX_SENDFILE size so that each section of the file can be sent with sendfile(). I don't believe there is anything we can do in this case, except to call the OOM abort function in APR, which has already been done in trunk. If you had a 64-bit machine/OS/httpd (or anywhere sizeof(apr_off_t) <= sizeof(apr_size_t)), I believe it would work, since it would attempt to sendfile() it all in one bucket rather than millions of AP_MAX_SENDFILE-sized buckets.

I don't think it's unreasonable to expect that the server should either handle this situation correctly or at least fail gracefully without attempting to eat all your RAM.

It would be possible to handle this quite correctly by storing an "insanely large file" in a bucket with unknown length. This could be done either as a modification to the file bucket or, I suppose, as a new bucket type. httpd could handle such a bucket without problems; apr_brigade_insert_file() could do whatever is necessary to insert it. But I'm not entirely convinced it would be worth all the effort to fix this properly unless someone really wants to serve insanely large files with 32-bit httpd binaries. By my calculations apr_brigade_insert_file() will be using 36MB of RAM in bucket structures per petabyte of file.

The server has effectively DoSed itself. A 500 response would be a sensible alternative. How do we detect when to bail out? Maybe a LimitResponseSize directive (with a default value of AP_MAX_SENDFILE) would make sense as a sanity check?

Created attachment 18464 [details]
Add a "LimitResponseFileSize" directive for default_handler to use.
This patch implements Nick's suggestion to have a config directive that limits
the maximum size of a file we will send.
I named the directive "LimitResponseFileSize" instead of "LimitResponseSize",
because the directive only affects default_handler; other handlers could still
send more information than this in a response.
Might not be obvious: we have a keyword rather than changing the subject ;-)

Ahh, didn't catch that. Thanks. :-)

Please let's not add yet-another-config-directive just to paper over a bug.

1) This doesn't catch all the places where such files are added to brigades.
2) There is more code added there than would be necessary to actually handle such files correctly.
3) Enforcing a 16MB default response size is... unwise ;)

I didn't think it would be that easy either... nothing's that easy. :) You're suggesting a "bucket with unknown length"... I'm not really sure what this means or how it'd work, but it *sounds* interesting...

> 1) This doesn't catch all the places where such files are added to brigades.

Fair enough.

> 2) There is more code added there than would be necessary to actually handle such files correctly.

Well, that patch took me 10 minutes to whip up... so I guess that means you can supply a patch to fix this properly in 2 minutes? :)

> 3) Enforcing a 16MB default response size is... unwise ;)

People shouldn't have files larger than that on their webservers! HTTP is not a file transfer protocol! ... kidding... I actually had no idea the limit was set that low. But that means that httpd is creating a bunch of these structures whenever it sends out any large file, not just an insanely large one... hmm... I'm going to stop thinking about this now.

Well, OK :) I *thought* this would be a 2-minute patch, but modifying the FILE bucket to do this actually isn't possible, so my point (2) is out of the window. I still think that adding config directives for this is the wrong approach, however. It's actually annoyingly difficult to fix this in APR-util, because apr_brigade_insert_file() cannot assume that the *whole* passed-in file must be inserted; so a new function with those semantics would be needed.