Created attachment 34445 [details]
A sample of journalctl output

With HTTP/2 enabled, many similar exceptions frequently appear in the journal:

SEVERE: Servlet.service() for servlet [default] in context with path [] threw exception
java.nio.BufferOverflowException
	at java.nio.HeapByteBuffer.put(HeapByteBuffer.java:189)

SEVERE: Error processing request
java.nio.BufferOverflowException
	at java.nio.HeapByteBuffer.put(HeapByteBuffer.java:189)

SEVERE: Error finishing response
java.nio.BufferOverflowException
	at java.nio.HeapByteBuffer.put(HeapByteBuffer.java:189)

See the attachment for full stack traces.

There are several HTTP and HTTPS connectors with IPv4 and IPv6 addresses. The HTTPS connectors are configured like this:

<Connector address="..." port="443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           connectionTimeout="20000" compression="on"
           compressableMimeType="text/html,application/xml,text/css,application/javascript,text/plain,application/json">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateFile="..."
                     certificateKeyFile="..."
                     certificateChainFile="..." />
    </SSLHostConfig>
</Connector>

Without <UpgradeProtocol> the exceptions do not appear.

The server is running Tomcat 8.5.8 + tcnative 1.2.10 + OpenSSL 1.0.2h. I cannot reproduce this on a testing server with the same configuration using my own requests.
This is a bad bug report. The stack trace indicates the issue occurs with HTTP/1.1; HTTP/2 is not affected. The headers sent back seem to be too large, but I don't really see why. It could be a state issue with the new ByteBuffer based code, maybe related to the upgrade. Does this also occur on 8.5.6?
You're right, now I see similar exceptions without HTTP/2 enabled, but it took many hours after the server's restart for them to appear. Triples of exceptions show up at a rate of about 3000 per hour, with random intervals between them. They disappear again after a restart with the same settings. Unfortunately, the old journal data is already erased, so I cannot say definitively whether this bug is present in previous versions, but I remember seeing strange exceptions some time ago with different versions from the 8.5 line after relatively long Tomcat uptimes. I've just downgraded Tomcat to version 8.5.6 to check; it will take some time, maybe a few days. It seems that long uptime and heavy load are required to reveal this bug.
It seems that the check in Http11OutputBuffer.checkLengthBeforeWrite() is not correct: it compares against ByteBuffer.capacity(), but ByteBuffer.limit() should actually be used.
Hi,

(In reply to Evgenij Ryazanov from comment #3)
> It seems that check in Http11OutputBuffer.checkLengthBeforeWrite() is not
> correct.
>
> It compares with ByteBuffer.capacity(), but ByteBuffer.limit() should be
> used actually.

The check is correct. If you take a look at the code again you will see that the ByteBuffer is kept in "writing" mode, i.e. the limit is the capacity. I'll do a code review to try to find the reason for this exception.

Regards,
Violeta
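To illustrate the "writing mode" invariant: a minimal standalone sketch (not Tomcat code) showing that a freshly allocated buffer has limit == capacity, and that put() moves only the position. While that invariant holds, checking against capacity() is equivalent to checking against limit():

```java
import java.nio.ByteBuffer;

public class WritingModeDemo {
    public static void main(String[] args) {
        // A freshly allocated buffer is in "writing" mode:
        // position = 0 and limit = capacity.
        ByteBuffer headerBuffer = ByteBuffer.allocate(8 * 1024);
        System.out.println(headerBuffer.limit() == headerBuffer.capacity()); // true

        // put() advances the position but leaves the limit untouched,
        // so the invariant survives normal header writes.
        headerBuffer.put("HTTP/1.1 200 OK\r\n".getBytes());
        System.out.println(headerBuffer.position()); // 17
        System.out.println(headerBuffer.limit());    // 8192
    }
}
```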
Oh, I see. But an exception in the middle of the commit() method can prevent proper reinitialization:

@@ -347,6 +347,8 @@
         if (headerBuffer.position() > 0) {
             // Sending the response header buffer
             headerBuffer.flip();
+            // Random I/O exception
+            if (Math.random() < 0.1)
+                throw new IOException();
             socketWrapper.write(isBlocking(), headerBuffer);
             headerBuffer.position(0).limit(headerBuffer.capacity());
         }
@@ -493,6 +495,8 @@
      * requested number of bytes.
      */
     private void checkLengthBeforeWrite(int length) {
+        if (headerBuffer.capacity() != headerBuffer.limit())
+            System.err.printf("%d != %d%n", headerBuffer.capacity(), headerBuffer.limit());
         // "+ 4": BZ 57509. Reserve space for CR/LF/COLON/SP characters that
         // are put directly into the buffer following this write operation.
         if (headerBuffer.position() + length + 4 > headerBuffer.capacity()) {
nextRequest() sets the position back to 0, but leaves the limit as is.
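The failure sequence can be reproduced in isolation. This is a hypothetical simplification, not the real Http11OutputBuffer: commit() here only mimics the flip/write/restore steps, and the "nextRequest" reset is just position(0). An IOException between flip() and the restore leaves the limit stuck at the old header length, so the next response overflows even though the capacity check passes:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

public class StaleLimitDemo {
    // Simplified stand-in for Http11OutputBuffer.commit(): flip to read
    // mode, write to the socket, then restore writing mode. The restore
    // is skipped if the (simulated) socket write fails.
    static void commit(ByteBuffer headerBuffer, boolean failWrite) throws IOException {
        headerBuffer.flip();                  // limit = old position, position = 0
        if (failWrite) {
            throw new IOException("simulated socket write failure");
        }
        // socketWrapper.write(...) would drain the buffer here
        headerBuffer.position(0).limit(headerBuffer.capacity());
    }

    public static void main(String[] args) {
        ByteBuffer headerBuffer = ByteBuffer.allocate(8192);
        headerBuffer.put(new byte[40]);       // 40 bytes of headers pending

        try {
            commit(headerBuffer, true);       // IOException mid-commit
        } catch (IOException e) {
            // The connection error is handled, but the buffer is left
            // flipped: limit is stuck at 40 instead of 8192.
        }

        // Like nextRequest(): only the position is reset, not the limit.
        headerBuffer.position(0);

        try {
            headerBuffer.put(new byte[100]);  // only 40 bytes remaining
        } catch (java.nio.BufferOverflowException boe) {
            System.out.println("BufferOverflowException: limit=" + headerBuffer.limit());
        }
    }
}
```

Run as written, this prints "BufferOverflowException: limit=40", matching the HeapByteBuffer.put stack traces in the report.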
(In reply to Evgenij Ryazanov from comment #5)
> Oh, I see. But exception in the middle of commit() method can prevent proper
> reinitialization.
>
> @@ -347,6 +347,8 @@
>          if (headerBuffer.position() > 0) {
>              // Sending the response header buffer
>              headerBuffer.flip();
> +            // Random I/O exception
> +            if (Math.random() < 0.1)
> +                throw new IOException();
>              socketWrapper.write(isBlocking(), headerBuffer);

Yes, that's the problem I also saw while inspecting the code: an IOException can be thrown while writing the response headers to the socket. Here [1] is the fix. Do you think you can test it?

Thanks,
Violeta

[1] http://svn.apache.org/viewvc?view=revision&revision=1769976
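For readers following along, the shape of the fix is a try/finally around the socket write, so the buffer is returned to writing mode whatever happens. This is a minimal sketch of that pattern under the same simplified assumptions as above; the actual patch is in r1769976 and differs in detail:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

public class CommitWithFinallyDemo {
    // Sketch of the fixed commit(): the finally block restores writing
    // mode even when the socket write throws, so a later request cannot
    // inherit a stale limit. Names are illustrative, not the exact
    // Tomcat code.
    static void commit(ByteBuffer headerBuffer, boolean failWrite) throws IOException {
        if (headerBuffer.position() > 0) {
            try {
                headerBuffer.flip();
                if (failWrite) {
                    throw new IOException("simulated socket write failure");
                }
                // socketWrapper.write(isBlocking(), headerBuffer);
            } finally {
                headerBuffer.position(0).limit(headerBuffer.capacity());
            }
        }
    }

    public static void main(String[] args) {
        ByteBuffer headerBuffer = ByteBuffer.allocate(8192);
        headerBuffer.put(new byte[40]);

        try {
            commit(headerBuffer, true);       // fails mid-commit
        } catch (IOException expected) {
            // connection-level error handling
        }

        // The buffer is back in writing mode despite the failed write.
        System.out.println(headerBuffer.limit());  // 8192
        headerBuffer.put(new byte[100]);           // no overflow now
        System.out.println(headerBuffer.position());
    }
}
```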
Yes, but it will take a couple of days I guess.
Ok, IMO a new 8.5 is definitely needed.
(In reply to Evgenij Ryazanov from comment #8)
> Yes, but it will take a couple of days I guess.

Here [1] I published a snapshot that contains the fix if you need it. It would be great if you could test it.

Thanks,
Violeta

[1] https://repository.apache.org/content/repositories/snapshots/org/apache/tomcat/tomcat/8.5-SNAPSHOT/
The server is now running tomcat-8.5-20161116.160434-5.tar.gz. I will check it for exceptions later.
Seems to be fixed. No new exceptions in the journal and no more random ERR_EMPTY_RESPONSE errors in clients after a reasonably long uptime under heavy load.
Hi,

Thanks for testing the fix. It will be available from 9.0.0.M14 and 8.5.9 onwards.

Regards,
Violeta
I was thinking maybe the state could be reinitialized when "creating" the output buffer, but the try/finally should be cheap and is the better option.
*** Bug 60433 has been marked as a duplicate of this bug. ***
*** Bug 60455 has been marked as a duplicate of this bug. ***
Dear support,

When can I expect the patch to be included in a Tomcat 8.5 or Tomcat 9 release? I have not found it in the changelog of the latest Tomcat 8.5.

Thank you,
Jan
8.5.9 is being voted on at the moment. If everything is OK it will be available in the next few days.