Bug 44494 - Requests greater than 8k being truncated.
Status: RESOLVED FIXED
Product: Tomcat 5
Classification: Unclassified
Component: Connector:Coyote
Version: Nightly Build
Hardware: Other All
Importance: P2 normal
Assigned To: Tomcat Developers Mailing List
Duplicates: 45215 45349 45350
Reported: 2008-02-26 19:47 UTC by Jonathan Rubin
Modified: 2008-09-24 13:25 UTC

Attachments
  patch for trunk repository (7.39 KB, patch) - 2008-03-19 05:10 UTC, Suzuki Yuichiro
  patch for tomcat6.x repository (5.84 KB, patch) - 2008-03-19 05:12 UTC, Suzuki Yuichiro
  patch for tomcat5.5 (2.61 KB, patch) - 2008-03-19 05:13 UTC, Suzuki Yuichiro
  patch for tomcat5.5 (3.26 KB, patch) - 2008-03-19 05:15 UTC, Suzuki Yuichiro
  test case for Reader buffering (3.89 KB, application/octet-stream) - 2008-03-22 09:04 UTC, Suzuki Yuichiro
  New test application that can check the contents of requested body (4.79 KB, application/octet-stream) - 2008-03-24 02:51 UTC, Suzuki Yuichiro
  WAR includes other test cases (7.54 KB, application/octet-stream) - 2008-03-26 03:38 UTC, Suzuki Yuichiro
  patch that make readAheadLimit value effective (2.37 KB, patch) - 2008-04-18 21:51 UTC, Suzuki Yuichiro
  Patch for TC 5.5 Part 1 - connectors (CharChunk, B2CConverter) (2.14 KB, patch) - 2008-04-28 11:47 UTC, Rainer Jung
  Patch for TC 5.5 Part 2 - container (InputBuffer) (1.05 KB, patch) - 2008-04-28 11:48 UTC, Rainer Jung

Description Jonathan Rubin 2008-02-26 19:47:31 UTC
After updating from Tomcat 5.5.23 to Tomcat 5.5.26, requests greater than 8k appear to be truncated when read via the CoyoteRequest.getReader() method.

The following steps reproduce the issue:
1. I constructed a POST request from a JSP page which exceeded 8k in size.
2. In the corresponding servlet I call request.getReader() and pipe the returned BufferedReader into a string.
3. Inspection of the string reveals that the content is truncated after 8k.
4. If I replace the request.getReader() call with request.getInputStream(), no truncation occurs. Likewise, after reverting to Tomcat 5.5.23 and using request.getReader(), no truncation occurs.
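The drain loop in step 2 can be sketched with plain java.io types (readFully here is a hypothetical helper, not the servlet code from the report); with the 5.5.26 bug, the resulting string stopped at 8,192 characters instead of covering the full body.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReadBody {
    // Drains a Reader into a String, the way a servlet would drain request.getReader().
    static String readFully(Reader reader) throws IOException {
        StringBuilder sb = new StringBuilder();
        char[] buf = new char[8192];
        int n;
        while ((n = reader.read(buf)) != -1) {
            sb.append(buf, 0, n);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a body larger than the 8k buffer; with the bug, only the
        // first 8192 chars survived this loop.
        StringBuilder body = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            body.append('x');
        }
        String result = readFully(new BufferedReader(new StringReader(body.toString())));
        System.out.println(result.length());
    }
}
```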
Comment 1 Filip Hanik 2008-02-26 21:00:48 UTC
Bug confirmed; this is a regression in 6.0.x as well.

SEVERE: Servlet.service() for servlet jsp threw exception
java.lang.ArrayIndexOutOfBoundsException: 8192
	at org.apache.tomcat.util.buf.CharChunk.substract(CharChunk.java:388)
	at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:368)
	at org.apache.catalina.connector.CoyoteReader.read(CoyoteReader.java:93)
	at org.apache.jsp.upload_jsp._jspService(upload_jsp.java:83)
	at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
	at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
	at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:337)
	at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:266)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
	at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:879)
	at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:719)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2080)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
	at java.lang.Thread.run(Thread.java:595)
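The index in the exception matches the default 8k buffer: a char[8192] has valid indices 0..8191, so touching index 8192 fails exactly as in the trace above. A trivial stand-alone illustration (not Tomcat code):

```java
public class IndexDemo {
    public static void main(String[] args) {
        char[] buf = new char[8192]; // same size as the default request buffer
        try {
            char c = buf[8192];      // first out-of-range index, as in the trace
            System.out.println(c);   // never reached
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught ArrayIndexOutOfBoundsException");
        }
    }
}
```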
Comment 2 Filip Hanik 2008-02-27 12:20:50 UTC
This has been fixed in the trunk, proposals to fix it in 6.0.x and 5.5.x have been added.
Comment 3 Filip Hanik 2008-02-27 12:22:56 UTC
URL to fix
http://svn.apache.org/viewvc?view=rev&revision=631700
Comment 4 Suzuki Yuichiro 2008-03-19 05:03:27 UTC
In some cases the fix r631700 seems OK,
but there are still some problems:

1. The truncation still occurs
   with non-default maxHttpHeaderSize values like "49152".
2. The data is corrupted after BufferedReader's mark() (with a large readAheadLimit)
   and reset() methods are called, though the size is correct.
3. Garbage remains in the buffer of ReadConvertor (which extends InputStreamReader)
   when broken data was POSTed or the application does not
   read the data completely, especially with multibyte characters.
   The data of the next request will then be corrupted.

I applied r631700 to Tomcat 6.0.16 and Tomcat 5.5.26, and detected these problems.

I attach patches.

Comment 5 Suzuki Yuichiro 2008-03-19 05:10:13 UTC
Created attachment 21683 [details]
patch for trunk repository

This patch may fix some buffering bugs.
It is against the trunk repository: http://svn.apache.org/repos/asf/tomcat/trunk
Comment 6 Suzuki Yuichiro 2008-03-19 05:12:19 UTC
Created attachment 21684 [details]
patch for tomcat6.x repository

This patch may fix some buffering bugs.
It is against the Tomcat 6.x repository: http://svn.apache.org/repos/asf/tomcat/tc6.0.x/trunk
Comment 7 Suzuki Yuichiro 2008-03-19 05:13:54 UTC
Created attachment 21685 [details]
patch for tomcat5.5

This patch may fix some buffering bugs.
It is against the Tomcat 5.5 connector repository: http://svn.apache.org/repos/asf/tomcat/connectors/trunk
Comment 8 Suzuki Yuichiro 2008-03-19 05:15:26 UTC
Created attachment 21686 [details]
patch for tomcat5.5

This patch may fix some buffering bugs.
It is against the Tomcat 5.5 container repository: http://svn.apache.org/repos/asf/tomcat/container/tc5.5.x
Comment 9 Suzuki Yuichiro 2008-03-19 05:53:43 UTC
Although they may not be perfect,
I tested these patches against several patterns, including those mentioned above.
Comment 10 Remy Maucherat 2008-03-19 08:38:17 UTC
Not bad, but I have some problem believing that no longer using bb.getLength() as the value for limit (or not using IntermediateInputStream.available() inside convert, which is the same) doesn't cause any problems.
Comment 11 Filip Hanik 2008-03-19 08:58:09 UTC
I tested the patch on 5.5 and it didn't work; I will look deeper into it.

Comment 12 Filip Hanik 2008-03-19 09:07:05 UTC
I also tested the trunk patch, and couldn't get it to work properly
Comment 13 Remy Maucherat 2008-03-19 09:55:37 UTC
Just use the 6.0 branch for testing, no need to test all branches, the code is identical in all of them. The key difference I see is that the limit used in the loop is not the same, and you should probably be testing that.
Comment 14 Filip Hanik 2008-03-19 12:25:03 UTC
(In reply to comment #13)
> Just use the 6.0 branch for testing, no need to test all branches, the code is
> identical in all of them. The key difference I see is that the limit used in
> the loop is not the same, and you should probably be testing that.
> 

True, there is a question here that could solve the problem:
why does Reader.markSupported() return true? Isn't this misleading?
If we are streaming a few megabytes of data, Tomcat would have to cache all that data in order to support mark properly.

markSupported()==false would mean we don't need to support mark() and reset(), and IMHO corresponds better to the streaming implementation we have.
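The contract under discussion is the standard java.io one: a Reader that returns true from markSupported() promises that reset() works as long as the caller reads no more than readAheadLimit characters after mark(), which forces the implementation to buffer up to that many characters. A small demonstration with the stock BufferedReader:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class MarkResetDemo {
    public static void main(String[] args) throws IOException {
        BufferedReader r = new BufferedReader(new StringReader("abcdefghij"), 4);
        System.out.println(r.markSupported()); // BufferedReader always reports true

        r.mark(4);            // caller promises to read at most 4 chars before reset()
        r.read(new char[3]);  // stay within the read-ahead limit
        r.reset();            // rewind to the mark; the reader had to cache those chars
        System.out.println((char) r.read());   // back at 'a'
    }
}
```

A readAheadLimit of 10MB would oblige the reader to cache up to 10MB, which is the cost Filip is pointing at.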

Comment 15 Suzuki Yuichiro 2008-03-20 18:26:37 UTC
Remy and Filip, thank you for immediate checks. 

Yes, the logic is the same in all of them.

(In reply to comment #11, #12)
> I tested the patch on 5.5 and it didn't work, I will look deeper into it
> I also tested the trunk patch, and couldn't get it to work properly

I'm sorry, I might have made a mistake in something.
I will check them more.

(In reply to comment #14)
> if we are streaming a few megabytes of data, tomcat would have to cache all
> that data in order to support mark properly.

To be more precise,
Tomcat would have to cache data only after the mark() method is called,
and only up to the readAheadLimit size
that the application specified with that method.

Of course, we can choose whether or not to support these methods.
I think it would be wonderful if Tomcat could support them,
and I believe the Tomcat implementation is capable of it.

Comment 16 Suzuki Yuichiro 2008-03-20 19:57:35 UTC
(In reply to comment #10)

> Not bad, but I have some problem believing that no longer using bb.getLength()
> as the value for limit (or not using IntermediateInputStream.available() inside
> convert, which is the same) doesn't cause any problems.

Is your concern addressed by the following call chain?

bc.getLength() is called // bc in IntermediateInputStream means bb.
at IntermediateInputStream.available()
at StreamDecoder$CharsetSD.inReady()
at StreamDecoder$CharsetSD.implRead(char[], int, int)
at StreamDecoder$CharsetSD(StreamDecoder).read(char[], int, int)
at ReadConvertor(InputStreamReader).read(char[], int, int)
at ReadConvertor.read(char[], int, int)
at B2CConverter.convert(CharChunk, int)
at B2CConverter.convert(ByteChunk, CharChunk, int)
at B2CConverter.convert(ByteChunk, CharChunk, int)
at InputBuffer.realReadChars(char[], int, int)
at CharChunk.substract(char[], int, int)
at InputBuffer.read(char[], int, int)
at CoyoteReader.read

I saw that with debugger.

Also, IntermediateInputStream.read (which means bb.substract())
will return -1 if the requested byte stream is at the end.

One possible cause of the problem is that the ReadConvertor object
(which extends InputStreamReader) is recycled and reused
after the underlying input stream returns -1,
though it works with no trouble in my environment.

If that is not guaranteed to be safe, we might have to create a new
ReadConvertor object for each request instead of recycling the used object.
Comment 17 Filip Hanik 2008-03-21 08:48:11 UTC
(In reply to comment #15)
> Remy and Filip, thank you for immediate checks. 
> 
> Yes, the logic is same in all of them.
> 
> (In reply to comment #11, #12)
> > I tested the patch on 5.5 and it didn't work, I will look deeper into it
> > I also tested the trunk patch, and couldn't get it to work properly
> 
> I'm sorry, I might have made a mistake in something.
> I will check them more.
> 
> (In reply to comment #14)
> > if we are streaming a few megabytes of data, tomcat would have to cache all
> > that data in order to support mark properly.
> 
> If it says more in detail,
> Tomcat would have to cache data only after the mark() method calling, and
> Tomcat would have to cache characters up to only the size
> that the application specified with the method.
yes, only after mark has been called, but if you set a readAheadLimit of 10MB, one has to cache that.
> 
> Of course, we can choose to support these methods or not to support.
> I think it's wonderful if Tomcat can support that.

yes, it's not a question of supporting or not supporting, it's where in the code you support it. With the ByteChunk/CharChunk byte arrays referenced left and right, it becomes so much harder. I'll create a little POC to show how simple it is.
> And I'm believing that the implementation of Tomcat has enough possibility.
> 

goes against the KISS principle :)
Comment 18 Filip Hanik 2008-03-21 10:24:06 UTC
I posted to the dev list about this; basically, one can just take advantage of java.io.BufferedReader to do the caching:

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest)request;
        HttpServletRequestWrapper wrapper = new HttpServletRequestWrapper(req) {
            BufferedReader reader = null;
            public BufferedReader getReader() throws IOException {
                if (reader==null) {
                    BufferedReader bf = super.getReader();
                    reader = new BufferedReader(bf);
                }
                return reader;
            }
        };
        // Pass the wrapper, not the original request, so the buffered getReader() is used
        chain.doFilter(wrapper, response);
    }

Comment 19 Remy Maucherat 2008-03-21 11:59:45 UTC
Did you really need to get three emails posted on the mailing list to express this idea?

Personally, I would be interested to know exactly which cases are not working at the moment after applying:
  Index: java/org/apache/catalina/connector/InputBuffer.java
===================================================================
--- java/org/apache/catalina/connector/InputBuffer.java	(revision 633279)
+++ java/org/apache/catalina/connector/InputBuffer.java	(working copy)
@@ -355,7 +355,7 @@
         }
 
         state = CHAR_STATE;
-        conv.convert(bb, cb, len);
+        conv.convert(bb, cb, bb.getLength());
         bb.setOffset(bb.getEnd());
 
         return cb.getLength();

This covers both no mark usage and usage with marks (which can probably be tested using readLine). Could I get some test cases?
Comment 20 Suzuki Yuichiro 2008-03-22 09:04:52 UTC
Created attachment 21700 [details]
test case for Reader buffering

war for the test
Comment 21 Suzuki Yuichiro 2008-03-22 09:54:44 UTC
(In reply to comment #16)

I tested some cases with http://svn.apache.org/repos/asf/tomcat/tc6.0.x/trunk
at r639909 (which includes r639891), of course without my patch.

I attached a WAR;
it is not a practical application, but it is enough for testing.

It includes multibyte UTF-8 characters,
so I worry that it might not work correctly in your environment.

/index.jsp : uses 3-byte characters


The results I tested:
 read(): expected 24587
    Sometimes a longer or shorter length is returned.
 read(char[1]): expected 8203
    Sometimes a longer or shorter length is returned.
 readLine(): expected 8203
    Sometimes a longer or shorter length is returned,
    or sometimes the following exception occurs.

 java.io.IOException
  at org.apache.catalina.connector.InputBuffer.reset(InputBuffer.java:456)
  at org.apache.catalina.connector.CoyoteReader.reset(CoyoteReader.java:134)
  at org.apache.catalina.connector.CoyoteReader.readLine(CoyoteReader.java:191)
  at org.apache.jsp.readLine_jsp._jspService(readLine_jsp.java:60)
 ...

This is very rare when the browser is on the same machine as Tomcat,
but it occurs more easily when the browser is on another machine.

I guess this behavior may depend on the internal buffer state,
and the buffer of SocketInputStream might be related
(whether the next data has arrived or not).


These tests pass with my patch in my environment.
Does the attached patch (at least the one for tomcat6.x) not work?


(In reply to comment #18)
I have not tried it yet.
I think it at least requires that the read(char[], int, int) method of the
underlying wrapped BufferedReader works correctly.
Which revision is suitable for the Filter?

Tomcat 6.0.16, Tomcat 6.0.16+r631700 (see comment #4, item 1),
and r639891 (mentioned above) all seem to work incorrectly.
Is this a problem with my environment?
Comment 22 Suzuki Yuichiro 2008-03-24 02:51:07 UTC
Created attachment 21705 [details]
New test application that can check the contents of requested body

The WAR attached to comment #20 works only with current IE.
This new one works with IE7, Firefox, and Netscape 7.1,
and it can also check whether the contents of the request body are correct.
Comment 23 Remy Maucherat 2008-03-25 18:27:12 UTC
Ok, first of all thanks a lot for the test case (I wouldn't be able to write a good multibyte test to save my life).

As with the original bug, the problem is with the limit being incorrect (reading too much causes the ByteBuffer to refill itself magically, while the calling code still hasn't seen anything back). In the loop, the limit int represents bytes, but it is decremented by the number of chars read (which works with single-byte charsets, obviously ...).
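The byte/char mismatch Remy describes is easy to see in isolation: with a multibyte charset, the number of chars produced is smaller than the number of bytes consumed, so decrementing a byte-denominated limit by a char count leaves the loop thinking less input is pending than there really is. A stdlib-only check (the string literal is just an arbitrary 3-byte-per-char example):

```java
import java.nio.charset.StandardCharsets;

public class BytesVsChars {
    public static void main(String[] args) {
        String s = "あいう"; // three characters, each 3 bytes in UTF-8
        int bytes = s.getBytes(StandardCharsets.UTF_8).length;
        int chars = s.length();
        System.out.println(bytes + " bytes, " + chars + " chars");
        // A loop that consumes 9 bytes but decrements its byte limit by 3 (the
        // char count) believes 6 bytes are still pending, which is exactly the
        // accounting error described above.
    }
}
```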

The new loop would be (replacing the old convert method):

    public void convert(ByteChunk bb, CharChunk cb, int limit)
    throws IOException {
        iis.setByteChunk(bb);
        try {
            // read from the reader
            int l = 0;
            while( limit > 0 ) { // conv.ready() ) {
                int size = limit < BUFFER_SIZE ? limit : BUFFER_SIZE;
                l = bb.getLength();
                int cnt=conv.read( result, 0, size );
                if( cnt <= 0 ) {
                    // End of stream ! - we may be in a bad state
                    if( debug>0)
                        log( "EOF" );
                    //		    reset();
                    return;
                }
                if( debug > 1 )
                    log("Converted: " + new String( result, 0, cnt ));

                cb.setLimit(cb.getStart() + cnt);
                cb.append( result, 0, cnt );
                limit = limit - (l - bb.getLength());
            }
        } catch( IOException ex) {
            if( debug>0)
                log( "Reseting the converter " + ex.toString() );
            reset();
            throw ex;
        }
    }
Comment 24 Filip Hanik 2008-03-25 19:04:21 UTC
when I ran the test:

it fails consistently with tomcat/6.0.x/trunk

after updating tomcat/trunk to the latest(same fix as in 6.0.x)it also fails consistently.

If I revert to revision 640451 in tomcat/trunk, the test seems to work fine


How do I make this error happen and how do I spot the error?
Do I need to configure any buffer sizes?

Filip
Comment 25 Remy Maucherat 2008-03-25 19:34:24 UTC
The first step is to reproduce the problem (which is easy for me with TC 6.0 trunk and the latest version of the test case). After fixing the update of the limit (because subtracting cnt is obviously wrong if limit is initialized to bb.length()), my traces no longer misbehave and I haven't been able to reproduce any problem. This makes perfect sense to me, since with 1-byte charsets (such as regular ASCII) cnt is equal to the number of bytes consumed, and character input was working fine for me after applying my one-line fix.
Comment 26 Suzuki Yuichiro 2008-03-25 21:11:57 UTC
(In reply to comment #23)

> cb.setLimit(cb.getStart() + cnt);
> cb.append( result, 0, cnt );

cb.append() doesn't modify cb.start,
and cb.start is set to zero by InputBuffer#realReadChars if markPos == -1.
So "cb.getStart() + cnt" is equal to or less than bb.length(),
and it will be bb.length()/3 if all characters are 3 bytes.
(The surplus bytes may exist in the buffer of ReadConvertor.)

This will break the buffer of the CharChunk.

If "cb.getEnd() + cnt" is used instead,
it will overwrite the limit that is set by the CoyoteReader#mark method.
I guess that may cause another problem
(though mark/reset methods have not worked correctly before this anyway).

Yuichiro
Comment 27 Suzuki Yuichiro 2008-03-26 03:38:11 UTC
Created attachment 21717 [details]
WAR includes other test cases

All of the test cases described in comment #4 are covered, assuming there are no bugs in this WAR itself.
For case 1, define maxHttpHeaderSize="xxxxx" as an attribute of the Connector tag in server.xml.
Comment 28 Remy Maucherat 2008-03-26 05:44:40 UTC
(In reply to comment #26)
> > cb.setLimit(cb.getStart() + cnt);
> > cb.append( result, 0, cnt );
> 
> cb.append() doesn't modify cb.start
> and cb.start is set zero by InputBuffer#realReadChars if markPos == -1.
> So "cb.getStart() + cnt" is same to or less than bb.length(),
> and it will be bb.length()/3 if all characters are 3 bytes.
> (the surplus bytes may exist in the buffer of ReadConvertor)

That's not the fix, that's something bad that I left in (the purpose was to make sure that cnt could be appended, so it should be getEnd + cnt, if getEnd + cnt > getLimit). I do not see why changing the limit to something greater could cause problems with marking.

As far as I am concerned, this bug is fixed. If you think it's not, feel free to spend time on it and propose alternate fixes.
Comment 29 Mark Thomas 2008-03-30 14:19:03 UTC
I have tested Remy's patch (less the one bad line) and it does fix the various multi-byte read test cases.

The patch has been applied to trunk and proposed for 6.0.x and 5.5.x.

mark/reset is still broken.
Comment 30 Mark Thomas 2008-04-05 10:49:43 UTC
The multi-byte read patch has been applied to 6.0.x and will be in the next release.
Comment 31 Remy Maucherat 2008-04-07 08:10:29 UTC
I found a problem with the buffer resizing in conjunction with mark/reset.

Index: java/org/apache/tomcat/util/buf/CharChunk.java
===================================================================
--- java/org/apache/tomcat/util/buf/CharChunk.java	(revision 645466)
+++ java/org/apache/tomcat/util/buf/CharChunk.java	(working copy)
@@ -480,7 +480,7 @@
 	    tmp=new char[newSize];
 	}
 	
-	System.arraycopy(buff, start, tmp, start, end-start);
+	System.arraycopy(buff, 0, tmp, 0, end);
 	buff = tmp;
 	tmp = null;
     }
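The intent of the one-line change can be illustrated with a toy resize (plain arrays, not Tomcat's CharChunk): when a mark is held, valid data can sit in the buffer before start, so copying only [start, end) during a resize silently drops the marked region, while copying [0, end) preserves it.

```java
public class GrowDemo {
    public static void main(String[] args) {
        char[] buff = "markdata".toCharArray();
        int start = 4, end = 8; // chars 0..3 sit behind 'start' but belong to a mark

        // Old behavior: copy only [start, end); the marked prefix is dropped.
        char[] oldTmp = new char[16];
        System.arraycopy(buff, start, oldTmp, start, end - start);
        System.out.println("old keeps prefix: " + new String(oldTmp, 0, 4).equals("mark"));

        // Patched behavior: copy [0, end); the marked prefix survives the resize.
        char[] newTmp = new char[16];
        System.arraycopy(buff, 0, newTmp, 0, end);
        System.out.println("new keeps prefix: " + new String(newTmp, 0, 4).equals("mark"));
    }
}
```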

Alternately, a buffered reader could be used (since the decoding code is now fixed, there should be no problem).
Comment 32 Mark Thomas 2008-04-07 14:27:43 UTC
The mark/reset test case still fails. I'm looking into it but it is taking me time to figure out exactly where the error is.
Comment 33 Mark Thomas 2008-04-07 15:50:41 UTC
I have a fix for this now. This fix and Remy's fix above have been proposed for 6.0.x and 5.5.x.
Comment 34 Rainer Jung 2008-04-10 01:59:00 UTC
I tested recent trunk and also 6.x with the latest STATUS file patches for this issue applied.

When we change maxHttpHeaderSize from the default to something bigger, we still have a problem with POST bodies bigger than 8KB. Using a smaller maxHttpHeaderSize than the default 8KB seems to be no problem; bigger sizes like 16KB or 48KB show the problem.

We simply use a text file of some size between 16KB and 64KB, send it via POST using e.g. "curl --data-binary @FILE" or "ab -p FILE" and try to read via a simple JSP:

<html>
Size: <%= request.getContentLength() %><br>
<%
out.println("Start reading<br>");
try {
    String line = null;
    int n = 0;
    int m = 0;
    java.io.BufferedReader br = request.getReader();
    while ((line = br.readLine()) != null) {
        n=n+1;
        m=m+line.length()+1;
        out.println(line);
    }
    out.println("Nothing more available, lines=" + n + ", bytes=" + m);
} catch(Exception ex) {
    out.println(ex);ex.printStackTrace();
}
out.println("<br>Done");
%>
</html>

It's primitive, and the byte count shown is wrong if the file has DOS line endings, but the failure shows up clearly: when truncation occurs, you'll see less than 8KB of data.
Comment 35 Remy Maucherat 2008-04-10 08:05:13 UTC
You've got to be able to reproduce this using the test webapp provided in this report (the line length is configurable, and there's a readLine test). I don't like a test using ab.
Comment 36 Rainer Jung 2008-04-10 08:15:03 UTC
I'll try that.

For the sake of completeness, here's the exception stack:

java.io.IOException
        at org.apache.catalina.connector.InputBuffer.reset(InputBuffer.java:475)
        at org.apache.catalina.connector.CoyoteReader.reset(CoyoteReader.java:134)
        at org.apache.catalina.connector.CoyoteReader.readLine(CoyoteReader.java:191)
        at org.apache.jsp.maxpost_jsp._jspService(maxpost_jsp.java:65)
        at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:369)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:337)
        at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:266)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:595)

The same test (curl/ab) works for 6.0.14 and for default maxHttpHeaderSize. Will report about the war soon.
Comment 37 Rainer Jung 2008-04-10 08:24:21 UTC
Using the war and maxHttpHeaderSize="16384" I did the following test case:

   Characters size: 17000_______________
   Use MultiByte Character: ( ) yes (*) no
     _____________________________________________________________________________________________________________

   [test case:]
   (*) only read [readLine()/read()/read(char[1])]
   ( ) garbage in buffer
     * real read size: 4096________________

   ( ) mark/reset
     * read size before mark(): 0___________________
     * readAheadLimit size: 8192________________
     * read size after mark()/before reset(): 8192________________
     _____________________________________________________________________________________________________________

   submit
     _____________________________________________________________________________________________________________


And on the next page I choose the first send (#readLine test):

   request#getReader()#readLine test
   abcdefghijklmnopqrst send
   request#getReader()#read() test
   abcdefghijklmnopqrst send
   request#getReader()#read(char[1]) test
   abcdefghijklmnopqrst send

The result is:

request#getReader test.

   readLine.jsp is called.
     _____________________________________________________________________________________________________________

   Content-Type:multipart/form-data; boundary=xnyLAaB03X
   Character Encoding:UTF-8
   Content-Length:17115
   expected:17076
   real read:4827
   isSame:false

Comment 38 Rainer Jung 2008-04-10 11:25:43 UTC
Since most people are deeper in the code than I am, here's what I get if I instrument o.a.c.connector.InputBuffer with a couple of log statements and post the file with an increased maxHttpHeaderSize. markPos gets set to -1 in realWriteChars:

10.04.2008 20:25:17 org.apache.catalina.connector.InputBuffer mark
INFO: Set markPos in mark from -1 to 0
10.04.2008 20:25:17 org.apache.catalina.connector.InputBuffer realReadChars
INFO: Found markPos in realReadChars 0
10.04.2008 20:25:17 org.apache.catalina.connector.InputBuffer realWriteChars
INFO: Reset markPos in realWriteChars from 0 to -1
10.04.2008 20:25:17 org.apache.catalina.connector.InputBuffer reset
INFO: Found =-1 in reset for CHAR_STATE
10.04.2008 20:25:17 org.apache.catalina.connector.InputBuffer reset
INFO: Reset markPos in reset/CHAR_STATE to -1
java.io.IOException: markPos=-1
Comment 39 Rainer Jung 2008-04-10 11:41:32 UTC
And the stack in realWriteChars is:

        at org.apache.catalina.connector.InputBuffer.realWriteChars(InputBuffer.java:335)
        at org.apache.tomcat.util.buf.CharChunk.flushBuffer(CharChunk.java:440)
        at org.apache.tomcat.util.buf.CharChunk.append(CharChunk.java:295)
        at org.apache.tomcat.util.buf.B2CConverter.convert(B2CConverter.java:100)
        at org.apache.catalina.connector.InputBuffer.realReadChars(InputBuffer.java:371)
        at org.apache.tomcat.util.buf.CharChunk.substract(CharChunk.java:416)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:405)
        at org.apache.catalina.connector.CoyoteReader.read(CoyoteReader.java:105)
        at org.apache.catalina.connector.CoyoteReader.readLine(CoyoteReader.java:154)
        at org.apache.jsp.maxpost_jsp._jspService(maxpost_jsp.java:64)
        at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:369)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:337)
        at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:266)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
        at java.lang.Thread.run(Thread.java:595)

Comment 40 Filip Hanik 2008-04-10 14:56:29 UTC
I think I might have found the problem,

I've been unable to reproduce the error using the NIO/APR (APR below) connector with this setting:
    <Connector port="8080" 
               protocol="org.apache.coyote.http11.Http11NioProtocol" 
               connectionTimeout="20000" 
               redirectPort="8443" 
               maxHttpHeaderSize="16384" 
               maxKeepAliveRequests="100"/>


I reproduce the error every time using the regular connector:
    <Connector port="8081" 
               protocol="org.apache.coyote.http11.Http11Protocol" 
               connectionTimeout="20000" 
               redirectPort="8443" 
               maxHttpHeaderSize="16384" 
               maxKeepAliveRequests="100"/>

and I am unable to reproduce it using the APR connector
    <Connector port="8082" 
               protocol="org.apache.coyote.http11.Http11AprProtocol" 
               connectionTimeout="20000" 
               redirectPort="8443" 
               maxHttpHeaderSize="16384" 
               maxKeepAliveRequests="100"/>

So I started debugging it.
The problem is in org.apache.tomcat.util.buf.CharChunk:
across two subsequent reads, where the input buffer is 16384, it tries to use the CharChunk to store all the data. The reason it works with the APR/NIO connectors is because those connectors never pull more than 8192 bytes out of the socket buffer at a time.

However, the regular BIO connector will read as much as it can, and then the B2CConverter tries to append the character array to the CharChunk,
but the CharChunk refuses to grow beyond the limit.

The fix is simple:
Index: java/org/apache/tomcat/util/buf/CharChunk.java
===================================================================
--- java/org/apache/tomcat/util/buf/CharChunk.java      (revision 646950)
+++ java/org/apache/tomcat/util/buf/CharChunk.java      (working copy)
@@ -454,7 +454,8 @@
        // Can't grow above the limit
        if( limit > 0 &&
            desiredSize > limit) {
-           desiredSize=limit;
+           limit = desiredSize;
+           //desiredSize=limit;
        }

        if( buff==null ) {

However, we could probably shrink the CharChunk back on recycle, but it won't grow beyond maxHttpHeaderSize, so I am not sure we need to.
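The truncation described above can be sketched with a toy limited buffer. The class and method names here are illustrative, not the actual Tomcat CharChunk API; the point is only that clamping the desired size to the limit silently drops data, which is what the oversized BIO read ran into:

```java
// Minimal sketch of a character buffer with a hard growth limit.
// Illustrative names only -- not the real org.apache.tomcat.util.buf.CharChunk.
public class LimitedCharBuffer {
    private char[] buff = new char[16];
    private int end = 0;          // number of valid chars
    private final int limit;      // hard cap on buffer growth

    LimitedCharBuffer(int limit) { this.limit = limit; }

    // Returns false when part of the input had to be dropped.
    boolean append(char[] src, int off, int len) {
        int desired = end + len;
        if (limit > 0 && desired > limit) {
            desired = limit;      // pre-fix behavior: refuse to grow past limit
        }
        if (desired > buff.length) {
            char[] tmp = new char[desired];
            System.arraycopy(buff, 0, tmp, 0, end);
            buff = tmp;
        }
        int n = Math.min(len, desired - end);
        System.arraycopy(src, off, buff, end, n);
        end += n;
        return n == len;
    }

    int length() { return end; }

    public static void main(String[] args) {
        // A 16384-char append against an 8192 limit loses half the data.
        LimitedCharBuffer b = new LimitedCharBuffer(8192);
        boolean complete = b.append(new char[16384], 0, 16384);
        System.out.println(complete + " " + b.length()); // false 8192
    }
}
```

Filip's patch corresponds to raising `limit` to `desired` instead of clamping, so the append is never truncated.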

Comment 41 Rainer Jung 2008-04-10 15:15:06 UTC
Hi Filip,

I haven't yet read your comment, because we both worked in parallel (mid-air collision). I'm posting my finding nevertheless, so we can look for the best solution.

This gets funny: while looking at the code I realized that the 5.5.x/6.0.x patch that introduced the problem (r569969 resp. r572498) actually removed three lines of code that I added in r371804 (resp. Bill applied my patch).

https://issues.apache.org/bugzilla/show_bug.cgi?id=38346

If I add those lines back, then I cannot reproduce the problem with or without maxHttpHeaderSize. I tried my POST test and also tried some of the numerous possibilities with the test war. So I think the following patch for trunk is worth some tests:

http://people.apache.org/~rjung/patches/maxhttpheadersize.patch
Comment 42 Filip Hanik 2008-04-10 15:20:00 UTC
If we don't want the simple fix of growing the CharChunk (the CharChunk will always be <= maxHttpHeaderSize, since a character is at least one byte), then the code would have to back out during the conversion. But I think that's not worth the time: this code is fragile, and has proven to break easily during larger fixes and API refactorings.

Comment 43 Filip Hanik 2008-04-10 15:21:07 UTC
Hi Rainer, both patches do the exact same thing, one explicitly and one implicitly. I'd vote +1 for either one.

Filip
Comment 44 Rainer Jung 2008-04-10 15:29:58 UTC
Next mid-air, BZ is obviously not IM :)

When I use the test war, the garbage test still seems to fail, even with
default maxHttpHeaderSize. The behaviour is the same with my patch, with
Filip's, with both of them applied and with unchanged trunk.

It might be that I don't understand how to use this test case. I chose the
garbage radio button with default values, submitted, and then chose send. The
resulting page says "correct:true", but then asks me to retry the request. If I
do that, I get "correct:false".
Comment 45 Suzuki Yuichiro 2008-04-10 21:26:03 UTC
Hi,

The behavior of the test case with default values is as follows. 

1. It posts 8192 multibyte characters.
2. The posted data is read up to 4096 characters.
   (Other information, like the multipart boundary characters
    and the Content-Disposition, is read too.)
3. It checks whether the read data is correct or not.
4. These steps are repeated on each retry.

If "correct:false" is output on retry, there were probably leftover
bytes of a multibyte character in the buffer of the ReadConvertor
object after the previous request.

When the read byte sequence ends in the middle of a multibyte
character, InputStreamReader seems to keep the incomplete byte
sequence internally until the underlying stream returns -1.

That incomplete byte sequence will then be used as the starting bytes
of the next request.
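This carry-over behavior can be reproduced outside Tomcat. A minimal sketch (class names are illustrative) feeds a Reader a UTF-8 character split across two reads, the way a socket might deliver it; the reader holds the dangling bytes until the continuation byte arrives:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class SplitSequenceDemo {
    // An InputStream that hands out predefined chunks, one per read() call,
    // simulating a stream that delivers a multibyte character across reads.
    // (Simplified: assumes each chunk fits in the caller's buffer.)
    static class ChunkedStream extends InputStream {
        private final byte[][] chunks;
        private int next = 0;
        ChunkedStream(byte[][] chunks) { this.chunks = chunks; }
        @Override public int read() {
            byte[] one = new byte[1];
            int n = read(one, 0, 1);
            return n == -1 ? -1 : one[0] & 0xFF;
        }
        @Override public int read(byte[] b, int off, int len) {
            if (next >= chunks.length) return -1;
            byte[] c = chunks[next++];
            int n = Math.min(len, c.length);
            System.arraycopy(c, 0, b, off, n);
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        // "あ" is 0xE3 0x81 0x82 in UTF-8; the first read ends mid-character.
        byte[][] chunks = { {(byte) 0xE3, (byte) 0x81}, {(byte) 0x82} };
        Reader r = new InputStreamReader(new ChunkedStream(chunks),
                                         StandardCharsets.UTF_8);
        // The reader keeps the two dangling bytes internally and only emits
        // the character once the continuation byte arrives.
        System.out.println((char) r.read()); // あ
    }
}
```

If the reader were recycled between the two reads without resetting its internal state, the dangling bytes would be prepended to the next "request" -- the cross-request corruption described above.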

Please see the recycle method part of the attached patch.

Yuichiro
Comment 46 Mark Thomas 2008-04-11 14:06:49 UTC
OK. That light up ahead might actually be the end of the tunnel rather than yet another train.

I believe that all necessary patches to fix all issues identified by the test case have been:
- applied to trunk
- proposed for 6.0.x
- proposed for 5.5.x

Please comment here if you think something has been missed.

Many thanks to Suzuki Yuichiro whose test case has proved invaluable in fixing this issue.
Comment 47 Suzuki Yuichiro 2008-04-18 21:46:52 UTC
I tried the current 6.0.x. It looks to be working fine in all test cases.
Thank you very much for the fix.

But I have a few comments.
1. The readAheadLimit value specified by the mark method has no effect.
   After the mark method is called once, Tomcat seems to buffer all of the body.
   The JavaDoc of BufferedReader allows Tomcat to use a buffer of arbitrary size
   (over readAheadLimit), but that may cause an OutOfMemoryError even if
   the application developer specified a moderate size for readAheadLimit.

2. In the recycle phase, ReadConvertor converts the whole byte sequence
   that was not read by the application, but the converted character sequence
   is not used.
   For recycling, it would be more efficient if the underlying input stream
   returned -1 immediately. (Though that might not be the most efficient solution.)

These may not be bugs, but rather possible improvements.
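For reference, a minimal sketch of the BufferedReader mark/reset contract that point 1 refers to: readAheadLimit is the number of characters the caller promises to read before calling reset() while keeping the mark valid, which is what lets a conforming implementation bound its buffering.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ReadAheadLimitDemo {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader("abcdef"), 4);

        in.mark(3);   // caller will read at most 3 chars before reset()
        in.read();    // 'a'
        in.read();    // 'b'
        in.reset();   // valid: we stayed within readAheadLimit

        System.out.println((char) in.read()); // 'a' again
    }
}
```

Per the JavaDoc, reading more than readAheadLimit characters after mark() may (but need not) invalidate the mark; an implementation that instead buffers the entire body, as described above, is permitted but risks unbounded memory use.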

Best Regards.
Comment 48 Suzuki Yuichiro 2008-04-18 21:51:32 UTC
Created attachment 21832 [details]
patch that make readAheadLimit value effective

patch for tomcat6.0.x

This will make the readAheadLimit value effective.
Comment 49 Suzuki Yuichiro 2008-04-20 17:43:44 UTC
Oops, I forgot to reset InputBuffer.readAheadLimit in the recycle method
in the patch, though the result may be OK with or without it.
If you use the patch, please add that reset for cleaner code.
Comment 50 Rainer Jung 2008-04-28 11:47:29 UTC
Created attachment 21872 [details]
Patch for TC 5.5 Part 1 - connectors (CharChunk, B2CConverter)

Backport of patches applied to TC 6.0 for fixing BZ 44494.
Comment 51 Rainer Jung 2008-04-28 11:48:39 UTC
Created attachment 21873 [details]
 Patch for TC 5.5 Part 2 - container (InputBuffer)   

Backport of patches applied to TC 6.0 for fixing BZ 44494.
Comment 52 Rainer Jung 2008-04-28 11:53:54 UTC
I backported the patches for 44494, applied so far to TC 6.0, to TC 5.5.

I reviewed all differences for the classes in o.a.tomcat.util.buf and o.a.c.connector for any side effects, since the classes drifted further apart between TC 5.5 and TC 6.0 than necessary. Some cleanups have been done in the older branch 5.5 and not in the newer ones.

The patch looks good to me (w.r.t. the test war attached here).

As soon as svn is back to read/write, I'll add the patches to the TC 5.5 STATUS file.

Further tests and comments welcome. The patched classes are compatible with 5.5 trunk and 5.5.26.
Comment 53 Andrew Savory 2008-05-13 04:31:05 UTC
Since this bug effectively means Tomcat 6.0.16 is unable to reliably handle requests, perhaps it should be publicised more widely, and the 6.0.16 release pulled (or replaced with 6.0.16.1)?
Comment 54 Paul Ryan 2008-07-01 11:49:57 UTC
I hate to make a post like this, but since it has not been decided: is there a plan for releasing this? It has caught a number of teams off guard. I agree with the poster in comment 53 that this is a major issue, as requests can't be trusted. Please update the homepage with this issue and pull 6.0.16 and 5.5.26 from public release, at the very least.

Thanks for the fix and I look forward to the changed release.

(In reply to comment #53)
> Since this bug effectively means tomcat 6.0.16 is unable to reliably handle
> requests, perhaps it should be publicised more widely, and the 6.0.16 release
> pulled (or replaced with 6.0.16.1)?
> 

Comment 55 Mark Thomas 2008-07-09 14:32:20 UTC
*** Bug 45350 has been marked as a duplicate of this bug. ***
Comment 56 Mark Thomas 2008-07-09 14:32:46 UTC
*** Bug 45349 has been marked as a duplicate of this bug. ***
Comment 57 Mark Thomas 2008-08-14 01:31:33 UTC
This has now been fixed in 5.5.x and will be included in 5.5.27 onwards.
Comment 58 Mark Thomas 2008-08-15 00:23:15 UTC
Just for the record, the 5.5.x fixes to the connector module were sufficient to fix this issue for 4.1.x as well.
Comment 59 Mark Thomas 2008-09-24 13:25:29 UTC
*** Bug 45215 has been marked as a duplicate of this bug. ***