I'm using the standard Servlet 3.0 async APIs with Tomcat 7.0.22. The server-side code is something like the following:

    final AsyncContext asyncContext = request.startAsync();
    ...
    // in a processing thread
    asyncContext.getRequest().getInputStream();
    ... read the input stream
    asyncContext.complete();

The InputStream returns 0 bytes for an HTTP POST with chunked transfer encoding. After debugging, I found that the ChunkedInputFilter is reused by org.apache.coyote.http11.AbstractInputBuffer, but it is never recycled (nextRequest()?) before being reused for another request. As a result, the endChunk flag stays true after the first request, and the filter always returns immediately without reading more from the buffer.
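To illustrate the failure mode described above, here is a minimal toy model (hypothetical class and method names, not the actual Tomcat source) of a per-connection input filter whose endChunk flag is never cleared: once the first request's terminating chunk has been seen, every read on a reused, un-recycled filter reports end-of-stream immediately.

```java
// Toy model of the reuse-without-recycle bug: the filter keeps an endChunk
// flag, and if the container reuses it for the next request without calling
// recycle(), every subsequent read terminates immediately with no data.
class ToyChunkedFilter {
    private boolean endChunk = false;

    // Returns the number of bytes "read", or -1 once the terminating
    // zero-length chunk has been seen.
    int doRead(byte[] chunk) {
        if (endChunk) {
            return -1;            // stale state: looks like end-of-stream
        }
        if (chunk.length == 0) {  // zero-length chunk marks end of the body
            endChunk = true;
            return -1;
        }
        return chunk.length;
    }

    // The recycle step the report says was being skipped.
    void recycle() {
        endChunk = false;
    }
}

public class ReuseDemo {
    public static void main(String[] args) {
        ToyChunkedFilter filter = new ToyChunkedFilter();

        // First request: one body chunk, then the terminating chunk.
        System.out.println(filter.doRead(new byte[5]));  // 5
        System.out.println(filter.doRead(new byte[0]));  // -1

        // Second request on the same un-recycled filter: reads nothing.
        System.out.println(filter.doRead(new byte[5]));  // -1

        // After recycling, the filter works again.
        filter.recycle();
        System.out.println(filter.doRead(new byte[5]));  // 5
    }
}
```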
This is an application error, not a Tomcat bug. I have confirmed that this works as expected. The users mailing list is the place to seek further help.
Created attachment 27821 [details] Test case

This is the WAR file containing a simple servlet filter that uses the Servlet 3.0 async API to echo the posted content.

    package test;

    import java.io.IOException;
    import java.io.Reader;

    import javax.servlet.AsyncContext;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    public class TestAsyncFilter implements Filter {

        @Override
        public void destroy() {
        }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response,
                FilterChain chain) throws IOException, ServletException {
            HttpServletRequest httpServletRequest = (HttpServletRequest) request;
            String path = httpServletRequest.getServletPath();
            if ("POST".equalsIgnoreCase(httpServletRequest.getMethod())
                    && path.startsWith("/test")) {
                final AsyncContext asyncContext = request.startAsync();
                asyncContext.start(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            readContent(asyncContext);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                });
            } else {
                chain.doFilter(request, response);
            }
        }

        private void readContent(AsyncContext asyncContext) throws IOException {
            HttpServletRequest httpServletRequest =
                    (HttpServletRequest) asyncContext.getRequest();
            Reader reader = httpServletRequest.getReader();
            StringBuilder sb = new StringBuilder();
            char[] buf = new char[4096];
            while (true) {
                int len = reader.read(buf);
                if (len < 0) {
                    break;
                } else {
                    sb.append(buf, 0, len);
                }
            }
            System.out.println("Request: " + sb.toString());
            asyncContext.getResponse().getWriter().println(sb.toString());
            asyncContext.complete();
        }

        @Override
        public void init(FilterConfig config) throws ServletException {
        }
    }

And the test client is:

    package test;

    import java.io.IOException;

    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.StringEntity;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.util.EntityUtils;

    public class TestHttpClient {

        public static void main(String[] args) throws IOException {
            HttpClient client = new DefaultHttpClient();
            for (int i = 0; i < 5; i++) {
                HttpPost post = new HttpPost("http://localhost:8080/tomcat7-async/test");
                StringEntity entity = new StringEntity("Test String: " + i);
                entity.setChunked(true);
                post.setEntity(entity);
                HttpResponse response = client.execute(post);
                System.out.println("Response[" + i + "]: "
                        + EntityUtils.toString(response.getEntity()));
            }
        }
    }
I would like to reopen the bug, with a test case to prove whether I was wrong :-). Thanks, Raymond
To reproduce the problem, deploy the WAR and run test.TestHttpClient to send chunked POSTs. BTW, the same application works fine under Jetty 8.x. I also tried converting the filter to an async servlet, and the same behavior occurred: only the first request was processed correctly; the others received empty content.
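As an alternative to the Apache HttpClient driver in the attachment, a sketch of the same chunked POST loop using only the JDK's java.net.HttpURLConnection (the URL matches the test WAR's mapping; adjust it for your deployment):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sends the same five chunked POSTs as the attached test client, but with
// plain HttpURLConnection: setChunkedStreamingMode forces
// "Transfer-Encoding: chunked" on the request body.
public class ChunkedPostClient {

    // Build a POST connection configured for chunked streaming.
    static HttpURLConnection open(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(0);  // 0 = default chunk size
        return conn;
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 5; i++) {
            HttpURLConnection conn = open("http://localhost:8080/tomcat7-async/test");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(("Test String: " + i).getBytes("UTF-8"));
            }
            System.out.println("Response[" + i + "]: " + conn.getResponseCode());
        }
    }
}
```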
Yep, I can reproduce this now with the additional information. Previously, it looked like the standard getParameter() confusion.
Thanks for the test case. That made tracking this down a lot easier than it could have been. The problem has been fixed in trunk (8.0.x) and 7.0.x and will be included in 7.0.23 onwards.
I am still facing this issue. My test cases fail randomly when I run my application with <async-supported>true</async-supported>. Everything works fine when the client is not using chunked encoding, and also when running with <async-supported>false</async-supported>.

I have debugged the problem, and this bug report helped me find the root cause: the ChunkedInputFilter is reused even when keepAlive is false, and my test cases start failing from that point on. So we need to recycle it in that case as well.

The issue was fixed for me by modifying AbstractHttp11Processor. I replaced lines #1538 to #1544:

    if (!keepAlive) {
        return SocketState.CLOSED;
    } else {
        getInputBuffer().nextRequest();
        getOutputBuffer().nextRequest();
        return SocketState.OPEN;
    }

with:

    getInputBuffer().nextRequest();
    if (!keepAlive) {
        return SocketState.CLOSED;
    } else {
        getOutputBuffer().nextRequest();
        return SocketState.OPEN;
    }

I was testing against 7.0.26; I will test with trunk and submit a patch if required. We cannot recycle the output buffer at this stage because it has not finished flushing its data. Let me know if there is a better place to handle this; I will test it and then post a patch. I don't have a simple test case to reproduce the issue. Let me know what else I can do to help you fix it.

Thanks, Sudhan
Thanks for the research. That pointed me in the right direction. This was fixed in r1334787 for trunk and r1334790 for 7.0.x and will be included in 7.0.28 onwards. If you are able to confirm that the fix works, that would be great.
Yes, it is fixed. I have tested with 7.0.x and it is working fine. I was getting some NPEs in InternalNioInputBuffer with my own fix; that's why I did not post a patch. I am not getting any exceptions with the latest code from 7.0.x. Thanks, Sudhan
Got this issue again. :( Earlier I was running with protocol="HTTP/1.1"; this time I got the error with protocol="org.apache.coyote.http11.Http11NioProtocol". Shall I reopen?

I was running a single client, i.e. one request at a time. The client got an exception and closed the socket. The server got the following exception:

    java.io.EOFException: Unexpected EOF read on the socket
        at org.apache.coyote.http11.InternalNioInputBuffer.readSocket(InternalNioInputBuffer.java:455)
        at org.apache.coyote.http11.InternalNioInputBuffer.fill(InternalNioInputBuffer.java:808)
        at org.apache.coyote.http11.InternalNioInputBuffer$SocketInputBuffer.doRead(InternalNioInputBuffer.java:833)
        at org.apache.coyote.http11.filters.ChunkedInputFilter.readBytes(ChunkedInputFilter.java:271)
        at org.apache.coyote.http11.filters.ChunkedInputFilter.parseCRLF(ChunkedInputFilter.java:355)
        at org.apache.coyote.http11.filters.ChunkedInputFilter.doRead(ChunkedInputFilter.java:147)
        at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:346)
        at org.apache.coyote.Request.doRead(Request.java:422)
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
        at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:431)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:315)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:200)

This is expected, but it looks like the filters were not recycled before the next request.
The next request failed with:

    java.io.IOException: Invalid CRLF
        at org.apache.coyote.http11.filters.ChunkedInputFilter.parseCRLF(ChunkedInputFilter.java:366)
        at org.apache.coyote.http11.filters.ChunkedInputFilter.doRead(ChunkedInputFilter.java:147)
        at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:346)
        at org.apache.coyote.Request.doRead(Request.java:422)
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
        at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:431)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:315)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:200)
Re-open to investigate
Full stack traces please.
The rest of the stack trace is my application code. I am starting async processing with:

    if (request.isAsyncSupported()) {
        final AsyncContext asyncCtx = request.startAsync(request, response);
        final Runnable command = new AsyncRequestProcessor(asyncCtx, this, this.actionObjects);
        Executor executor = (Executor) request.getServletContext()
                .getAttribute(Constants.ASYNC_REQUEST_EXECUTOR);
        executor.execute(command);
    }

The next line on the stack trace is ServletUtils.getBytesOfStream(ServletUtils.java:425), whose code is:

    public static ByteArrayInputStreamExt getBytesOfStream(final Map<String, Object> helperMap,
            final InputStream stream, int len, boolean chunked) throws IOException {
        try {
            ByteArrayInputStreamExt bais = (ByteArrayInputStreamExt) helperMap.get(Constants.DATA);
            byte[] tempBuf = null;
            if (bais == null) {
                tempBuf = new byte[len];
                bais = new ByteArrayInputStreamExt(tempBuf);
            } else {
                tempBuf = bais.getBuffer();
            }
            int readSoFar = 0;
            int bytesRead = 0;
            while (len > readSoFar) {
                bytesRead = stream.read(tempBuf, readSoFar, (len - readSoFar));
                if (bytesRead == -1) {
                    if (chunked) {
                        if (readSoFar == 0) {
                            bais.reset(0);
                            return bais;
                        } else {
                            bais.reset(readSoFar);
                            return bais;
                        }
                    }
                    throw new IOException("Failed to read from inputstream");
                }
                readSoFar += bytesRead;
            }
            bais.reset(len);
            return bais;
        } catch (OutOfMemoryError th) {
            logger.error("Creating a byte array of length: " + len
                    + " has resulted in OOM error: " + th.getMessage());
        }
        return null;
    }

This method is invoked as ServletUtils.getBytesOfStream(request.getInputStream(), length), in a loop reading 10 MB at a time. For chunked requests, the loop stops at EOF. The ByteArrayInputStreamExt handling is for reusing the byte[]. This happens only when the client aborts. Let me know if you need any more info. Which class should I check if I have to debug this?

Thanks, Sudhan
I think the issue is because of ChunkedInputFilter.needCRLFParse not being reset. I think we need to do needCRLFParse = false; in ChunkedInputFilter.recycle().
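To make the suspected mechanism concrete, here is a toy model (hypothetical names, not the real Tomcat source) of the stale flag: after chunk data is read, the parser remembers that a trailing CRLF still has to be consumed. If the client aborts before that CRLF arrives and the parser is reused without the flag being reset, the first bytes of the next request are misread as the missing CRLF, matching the "Invalid CRLF" stack trace above.

```java
import java.io.IOException;

// Toy model of a stale needCRLFParse flag carried across requests.
class ToyChunkParser {
    private boolean needCRLFParse = false;

    // Parse one wire fragment: first any pending CRLF, then chunk data.
    void doRead(String wire) throws IOException {
        if (needCRLFParse) {
            if (!wire.startsWith("\r\n")) {
                throw new IOException("Invalid CRLF");
            }
            wire = wire.substring(2);
            needCRLFParse = false;
        }
        if (!wire.isEmpty()) {
            needCRLFParse = true;  // data consumed; trailing CRLF expected next
        }
    }

    // The fix under discussion: clear per-request state on recycle.
    void recycle() {
        needCRLFParse = false;
    }
}

public class CrlfDemo {
    public static void main(String[] args) {
        ToyChunkParser p = new ToyChunkParser();
        try {
            p.doRead("hello");   // chunk data read; CRLF now pending
            // client aborts here; parser reused WITHOUT recycle()
            p.doRead("5;data");  // next request's first bytes
        } catch (IOException e) {
            System.out.println(e.getMessage());  // prints "Invalid CRLF"
        }
        p.recycle();             // with the flag reset, parsing succeeds
        try {
            p.doRead("5;data");
            System.out.println("ok");
        } catch (IOException e) {
            System.out.println("unexpected: " + e.getMessage());
        }
    }
}
```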
That looks like the cause to me too. This has been fixed in trunk and 7.0.x and will be included in 7.0.28 onwards.
(In reply to comment #14) Thank you. Fixed this in 6.0 as well, will be in 6.0.36.
(In reply to comment #14) The needCRLFParse fix was backported to 5.5 in r1359748 and will be in 5.5.36.