| Summary: | Apache2 exit signal Segmentation fault (11) apr_bucket_free | | |
|---|---|---|---|
| Product: | Apache httpd-2 | Reporter: | Nick <nickdragomir> |
| Component: | mod_fcgid | Assignee: | Apache HTTPD Bugs Mailing List <bugs> |
| Status: | RESOLVED DUPLICATE | | |
| Severity: | blocker | | |
| Priority: | P2 | | |
| Version: | 2.2.20 | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Linux | | |
Description

Nick 2011-12-04 04:22:29 UTC
Could I get an answer on this one please? I can't upload any large files without crashing Apache 2.2.20. Thank you. Nick.

What version of apr, and of apr-util?

libapr1 version: 1.4.5-1
libaprutil1 version: 1.3.12+dfsg-2

I have the exact same problem. I can upload files of around 500 MB without problems, but when I get close to or over 1 GB, Apache dies with a segmentation fault as soon as the upload is at ~99%. When this happens, a new Apache process takes over the upload and a new temp file is created (filling up again to ~99% of the original file size). This can repeat indefinitely until my /tmp dir is full of those almost-finished uploads.

My system:
Gentoo Linux x86_64 with kernel 3.2.12
apache-2.2.22 with worker MPM
apr-1.4.5
apr-util-1.3.12
mod_fcgid-2.3.6
php-5.3.10

This sounds very similar to what I found in bug 51747 (especially if combined with bug 51749, which exacerbates it): memory usage shoots up at the end of the client data transfer, as the request is read back and handed off to the backend. The comment "The generated core file is around 3GB ..." would support this as the cause, since on 32-bit the process runs out of addressable memory around that point.

Have to agree with Dominic here that this is simply a symptom of the larger issue already reported. What would help is a stack trace from a mod_fcgid built with the -g option (building mod_fcgid against an httpd built with maintainer mode should set those debug flags automatically when using that build's apxs command).

*** This bug has been marked as a duplicate of bug 51747 ***
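For anyone wanting to capture that trace, here is a minimal sketch of the debug-build and backtrace steps; the install prefix, source directories, and core-file path are assumptions for illustration, so adjust them to your layout:

```
# Build httpd with maintainer mode, which enables -g (debug symbols).
# /usr/local/apache2 is an assumed prefix.
cd httpd-2.2.x
./configure --prefix=/usr/local/apache2 --enable-maintainer-mode --with-mpm=worker
make && make install

# Build mod_fcgid against that httpd; its apxs carries the debug flags through.
cd ../mod_fcgid-2.3.x
APXS=/usr/local/apache2/bin/apxs ./configure.apxs
make && make install

# Allow core dumps, point httpd at a writable dump directory
# (CoreDumpDirectory /tmp in httpd.conf), then reproduce the crash.
ulimit -c unlimited
/usr/local/apache2/bin/apachectl start

# After the segfault, pull a full backtrace from the core
# (the core file may be named core or core.<pid> depending on the system).
gdb /usr/local/apache2/bin/httpd /tmp/core -ex 'bt full' -ex quit
```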