| Summary: | cgi lock when writing to stderr | | |
|---|---|---|---|
| Product: | Apache httpd-2 | Reporter: | Peter Whiting <pete> |
| Component: | mod_cgi | Assignee: | Apache HTTPD Bugs Mailing List <bugs> |
| Status: | CLOSED DUPLICATE | | |
| Severity: | major | CC: | michael, michaelk, notgod |
| Priority: | P3 | | |
| Version: | 2.0.39 | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | All | | |
**Description**

**Peter Whiting** 2002-07-05 17:35:27 UTC
Are you writing a very large quantity of stuff to stderr? This is not a new issue. Linux has a 64KB pipe limit by default. This means that once your stdout or stderr has had more than 64KB written to it before Apache has finished reading the request body and piping it to your CGI's stdin, you will deadlock. Try deferring your stderr logging until the entire body is read and the entire response is written. This is an apache-2.0-only issue; 1.3 does not exhibit this problem.

---

This is seriously hampering development for us. "Deferring your stderr logging until the entire body is read and the entire response is written" doesn't work for large scripts, sites, development environments, etc. Also, this system will hang with as little as 4KB of stderr activity, at least on RH 6.something with Apache 2.0.40... FreeBSD seems immune to this wackiness. For example:

```c
// Credit to: K.C. Wong
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#define SIZE 4075

void out_err()
{
    char buffer[SIZE];
    int i = 0;

    for (i = 0; i < SIZE - 1; ++i)
        buffer[i] = 'a' + (char)(i % 26);
    buffer[SIZE - 1] = '\0';

    // fcntl(2, F_SETFL, fcntl(2, F_GETFL) | O_NONBLOCK);
    fprintf(stderr, "short test\n");
    fflush(stderr);
    fprintf(stderr, "test error=%s\n", buffer);
    fflush(stderr);
} // out_err()

int main(int argc, char **argv)
{
    fprintf(stdout, "Content-Type: text/html\r\n");
    fprintf(stdout, "\r\n\r\n");
    out_err();
    fprintf(stdout, "<HTML>\n");
    fprintf(stdout, "<body>\n");
    fprintf(stdout, "<h1>hello world</h1>\n");
    fprintf(stdout, "</body>\n");
    fprintf(stdout, "</HTML>\n");
    fflush(stdout);
    exit(0);
} // main()
```

---

I will preface this with a single observation: we will need to expand the apr_poll API to include other 'objects', notably httpd-2.0 filters. The only way to accomplish this is to provide a set of added callbacks to query the filter chain for a data-ready state, and then add the filter endpoints (e.g. sockets or pipes) to the pollset if there is no data ready from any filter [or other extended] source or ready to send to any destination sink. It is doable, but it is dirty.

---

It's my impression that the default pipe size on Linux had been 64KB. Did this change somewhere along the way? Is it a kernel tuning option, or can it be controlled with something like ioctl? If we can make the CGI pipes accept at least 64KB, this remains a problem for both Win32 and Linux, but at least it's a sane boundary instead of 4KB!

---

Hello. I need to give a status update on this issue... Is anyone actively working on, or assigned to, a resolution for this issue? If not, is there perhaps a way I can escalate it? I would offer to help, but I am not much of a developer, and this is a major stumbling block for us at this point. Thanks in advance. If I can help in any way, please let me know.

---

Someone asked me if this was an Apache or OS issue; I asked our kernel guys, who said:

> "If nothing is reading the pipe (or the reader is held off from reading by your writer's user-space locking), then 4K is the internal buffering on a Linux pipe. You can't change that. OK, my assessment is that you either hit a weird glibc bug in stdio with pipes, or Apache 2.x has a deadlock in its own pipe handling."

---

Ran into the same behavior today, after upgrading to Red Hat 8 and Apache 2.0.40. I just installed 2.0.43 and it exhibits the same lock. Both Perl and C++ programs exhibit the locking. Locking only happens when writing to STDERR. This is using a stock Red Hat 8 install. Can reproduce on Apache 2.0.40-8 (the Red Hat default) and a fresh build of 2.0.43.
Sample C++ CGI program:

```cpp
#include <iostream>
#include <sys/types.h>
#include <unistd.h>

using namespace std;

int main()
{
    cout << "Content-Type: text/plain\n\n";
    cout << getpid() << endl;
    sleep(20); // allow me to hook an strace to the pid
    for (int i = 0; i < 1000; i++) {
        cout << "Here " << i << endl;
        cerr << "Here " << i << endl;
    }
    return 0;
}
```

Here is the last of an strace up to the hang:

```
write(1, "Here 466\n", 9) = 9
write(2, "H", 1) = 1
write(2, "e", 1) = 1
write(2, "r", 1) = 1
write(2, "e", 1) = 1
write(2, " ", 1) = 1
write(2, "4", 1) = 1
write(2, "6", 1) = 1
write(2, "6", 1) = 1
write(2, "\n", 1) = 1
write(1, "Here 467\n", 9) = 9
write(2, "H", 1) = 1
write(2, "e", 1) = 1
write(2, "r", 1) = 1
write(2, "e", 1
```

I just installed a copy of Apache 1.3.27. The same sample program runs fine and always completes.

---

On standard RH8 and RH9 installations this script causes locking:

```perl
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "<html><head><title>Locked</title></head><body><h1>Locked</h1></body></html>\n";
print STDERR "This will lock cgi\n" x 220;  ## Slightly more than 4k
```

There are no problems on the same installations with Apache 1.3. (More information on the same issue: http://nagoya.apache.org/bugzilla/show_bug.cgi?id=22030)

---

The newer PR #22030 has more info, so leave that one open.

*** This bug has been marked as a duplicate of 22030 ***