Bug 63690 - [HTTP/2] The socket [*] associated with this connection has been closed.
Summary: [HTTP/2] The socket [*] associated with this connection has been closed.
Status: RESOLVED FIXED
Alias: None
Product: Tomcat 9
Classification: Unclassified
Component: Connectors
Version: 9.0.24
Hardware: PC Linux
Importance: P2 normal
Target Milestone: -----
Assignee: Tomcat Developers Mailing List
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-08-23 15:36 UTC by Boris Petrov
Modified: 2023-09-05 08:26 UTC
CC List: 1 user



Attachments
Log (82.23 KB, text/plain)
2019-08-23 20:40 UTC, Boris Petrov
Details
Dump of the POST request (497.84 KB, text/plain)
2019-08-25 19:19 UTC, Boris Petrov
Details
Simple project demonstrating multipart issue (19.44 KB, application/x-zip-compressed)
2019-08-29 12:12 UTC, Chen Levy
Details

Description Boris Petrov 2019-08-23 15:36:43 UTC
Tomcat is version 9.0.24, APR 1.7.0, Tomcat Native 1.2.23.

When using HTTP/2, a multipart POST request from any browser causes this exception in 99% of cases:

~~~
javax.ws.rs.ProcessingException: Failed to buffer the message content input stream.
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:907)
        at user-code(SourceFile:58)
        ...
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: The socket [140,613,069,382,992] associated with this connection has been closed.
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:340)
        at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:632)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:362)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:132)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:110)
        at org.glassfish.jersey.message.internal.ReaderWriter.writeTo(ReaderWriter.java:92)
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:894)
        ... 52 common frames omitted
Caused by: java.io.IOException: The socket [140,613,069,382,992] associated with this connection has been closed.
        at org.apache.tomcat.util.net.AprEndpoint$AprSocketWrapper.doWrite(AprEndpoint.java:2315)
        at org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:793)
        at org.apache.tomcat.util.net.SocketWrapperBase.writeBlocking(SocketWrapperBase.java:529)
        at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:454)
        at org.apache.coyote.http2.Http2UpgradeHandler.writeWindowUpdate(Http2UpgradeHandler.java:770)
        at org.apache.coyote.http2.Stream$StreamInputBuffer.doRead(Stream.java:1122)
        at org.apache.coyote.Request.doRead(Request.java:551)
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
        ... 58 common frames omitted
~~~

Switching back to HTTP/1.1 by commenting out `<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"/>` in `conf/server.xml` fixes the issue.

I have no idea what I could have done wrong. GET requests work fine, as do non-multipart POST requests (or perhaps they are simply not as big, which is why they work). Not sure if that matters.

Please tell me how I can help debug the issue. Thanks!
Comment 1 Mark Thomas 2019-08-23 19:08:49 UTC
Enable debug logging for org.apache.coyote.http2 and provide the output for a single, failed request.
Comment 2 Boris Petrov 2019-08-23 20:40:47 UTC
Created attachment 36734 [details]
Log

This is a dump of the logging for `org.apache.coyote.http2`.
Comment 3 Mark Thomas 2019-08-24 08:20:20 UTC
The request has triggered the overhead protection because it looks abusive (small non-final DATA frame). Setting overheadDataThreadhold to zero will disable the specific protection triggered.
A debug trace with that disabled would be interesting to see how many frames like that the client is producing. It would also be worth looking into why the client is behaving that way.
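
For reference, a minimal sketch of how that setting can be applied to the HTTP/2 upgrade protocol element in `conf/server.xml` (in 9.0.24/9.0.25 the attribute carries this spelling):

```
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                 overheadDataThreadhold="0"/>
```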
Comment 4 Boris Petrov 2019-08-24 08:31:15 UTC
Thank you for the support, Mark.

As I said, this happens with both the latest Chrome and Firefox, as well as Firefox 47 (these are the three browsers I tested with), and only when using HTTP/2. The client side is making a POST request to upload a file. The library that we use is "jQuery File Upload" (https://github.com/blueimp/jQuery-File-Upload). We split the file into 1 MB chunks with that library. In 99% of the cases the first request fails. Does that give you any ideas?

I'll try setting `overheadDataThreadhold` to 0 on Monday and post the findings. You need the same logging as I already provided, just with that setting, correct?

Thanks again!
Comment 5 Mark Thomas 2019-08-24 14:48:40 UTC
Correct. Tx.
Comment 6 Boris Petrov 2019-08-25 19:19:15 UTC
Created attachment 36736 [details]
Dump of the POST request

This is a dump of the POST request for uploading a file (there should be 2 multipart POST requests because the file was 1.5 MB) after disabling the `overheadDataThreadhold` limit. The upload works in that case, by the way.

Anything you can figure out from the dump?
Comment 7 Mark Thomas 2019-08-27 06:36:36 UTC
Take a look at the following lines in the log:

Connection [1], Stream [7], Frame type [DATA], Flags [0], Payload size [...]

It looks like there is some buffering going on.

The first 6 DATA frames are 5x2852 bytes and 1x2124 bytes, for a total of exactly 16k (5*2852 + 2124 = 16384 bytes).

The first 24 frames are 20x2852, 3x2124 and 1x2123, for a total of 64k-1 (65535 bytes). The 25th frame is 1 byte, giving a total of 64k.

A similar (but not completely identical) pattern follows for the rest of the upload. It looks like the library you are using has various internal buffers, and what you are seeing (in terms of DATA frame size) is the result of interactions between those buffers (I'm assuming there is no HTTP/2 proxy between the client and Tomcat, or else things get more complicated).

Small HTTP/2 DATA frames are inefficient. Lots of them are considered to be abusive and, in some servers (not Tomcat), result in a DoS. Tomcat has expanded its overhead protection to guard against such abusive traffic. The default setting considers any non-final DATA frame of less than 1024 bytes abusive. The smaller the DATA frame, the more abusive it is considered.

I'd recommend opening an issue against the library you are using as it could be argued it should be sending fewer, larger HTTP/2 frames.

It could also be argued that Tomcat should use a lower overheadDataThreadhold default. However, the counter-argument is that a lower threshold is only required for inefficient clients. Where inefficient becomes abusive is an interesting question, and the answer will vary from server to server. As I said, in Tomcat's case such traffic is never abusive, only inefficient, but we want to encourage clients to be efficient.

I'm leaning towards leaving the default as is for now, but it is definitely something we should keep an eye on as more users pick up the latest 9.0.x and 8.5.x releases. If we see a lot of issues like this then we may need to review the default. I'll leave this open for now, but I am leaning towards resolving it as some form of "not a Tomcat issue".
Comment 8 Boris Petrov 2019-08-27 06:48:16 UTC
Hi, thanks for the detailed answer.

There is no intermediate HTTP/2 proxy.

Before I open an issue somewhere, could you please explain something to me. I'm not sure I fully understand what's going on, but how can a JavaScript library manage the HTTP/2 frames at all? As I said above, we're using "jQuery File Upload" (https://github.com/blueimp/jQuery-File-Upload), which splits the file into 1 MB chunks. Then, I guess, it does the POST request. Isn't splitting that request and its body then the responsibility of Chrome/Firefox? If by "client" you mean Chrome/Firefox... is it possible that both of them are so inefficient/not-clever? If you mean the JavaScript library - then I'm probably missing something. Some insight would be appreciated. Thanks!
Comment 9 Christopher Schultz 2019-08-28 03:10:06 UTC
(In reply to Mark Thomas from comment #7)
> Small HTTP/2 packets are inefficient. Lots of them are considered to be
> abusive and in some servers (not Tomcat) result in a DoS. Tomcat has
> expanded its overhead protection to protect against such abusive traffic.
> The default settings considers any non-final DATA frame of less than 1024
> bytes abusive. The smaller the DATA frame, the more abusive it is considered.

1024 might be too high for a default, but the good news is that the "abusive" threshold can be changed (right?).

Imagine an endpoint that is supposed to receive messages from a smartphone tracking a user's geographical location. Let's think about the kinds of packets you might expect to get. Let's assume JSON for a moment, and that there isn't a huge amount of other BS in the application: it's just doing what you'd expect. An update message might look like this:

{
  "MessageType" : "LOC-Update",
  "Timestamp" : "2019-08-27T23:02:00Z",
  "Latitude" : 51.508107,
  "Longitude" : -0.075938
}

Including the trailing newline, that message is a mere 128 bytes. Imagine sending one of those messages per second per client (which is pretty chatty, but hey, there are lots of crappy mobile apps out there, aren't there?). If I were designing such a service, I would arrange for the messages to be even smaller. There's no need for such verbose JSON: property names could be shortened, or, if the data format is relatively simple and/or fixed, the JSON object could be converted into a JSON array and the property names removed entirely. The message could be as short as:

["2019-08-27T23:02:00Z",51.508107,-0.075938]

That's a scant 44 bytes.

Not every application will be sending large documents around.
Comment 10 Mark Thomas 2019-08-28 07:09:19 UTC
(In reply to Boris Petrov from comment #8)
> Hi, thanks for the detailed answer.
> 
> There is no intermediate HTTP/2 proxy.
> 
> Before I open an issue somewhere, could you please explain me something. I'm
> not sure I fully understand what's going on but how can a JavaScript library
> manage the HTTP/2 frames at all?

It will depend on the API it uses to pass data to the browser. For example, if the API offers the capability to a) control the write buffer size and b) flush writes then the client can - broadly - control the size of the DATA frames written. I'm not at all familiar with the API in use. What I would suggest is to test a simple POST with the same file and no Javascript library and see how that behaves.
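
For example, a minimal test page along these lines (the `/upload` action is a placeholder) would let the browser perform the multipart POST itself, with no library involved:

```
<!-- Hypothetical test form: the browser builds the multipart request body itself -->
<form method="POST" action="/upload" enctype="multipart/form-data">
    <input type="file" name="file"/>
    <button type="submit">Upload</button>
</form>
```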


(In reply to Christopher Schultz from comment #9)
> 1024 might be too high for a default, but the good news is that the
> "abusive" threshold can be changed (right?).

Right.

<snip/>

> That's a scant 44 bytes.
> 
> Not every application will be sending large documents around.

Which is why the threshold doesn't apply to DATA frames with the EOS (end of stream) flag set. Sending a small request body in a single DATA frame is fine even if the body is just a single byte. Sending lots of small (less than 1024 bytes by default) DATA frames when you could send one larger DATA frame is not.
Comment 11 Chen Levy 2019-08-28 18:49:01 UTC
I encountered a similar issue, where a multipart form submission resulted in none of the form parameters being visible from the servlet (no exception or error).
I created a small test project containing a single HTML file with a multipart form, and a single servlet.
No Java or JavaScript libraries are involved.

Using the latest Firefox and Chrome I encounter the issue when uploading a 3 MB file. The overheadDataThreadhold="0" setting seems to resolve it.

I'd expect the default Tomcat distribution to allow this kind of activity without additional configuration.

I can supply/attach additional information if needed.
Thanks
Comment 12 Christopher Schultz 2019-08-28 19:16:23 UTC
(In reply to Mark Thomas from comment #10)
> Which is why the threshold doesn't apply to DATA frames with the EOS (end of
> stream) flag set. Sending a small request body in a single DATA frame is
> fine even if the body is just a single byte. Sending lots of small (less
> than 1024 bytes by default) DATA frames when you could send one larger DATA
> frame is not.

Aha, thanks for pointing out the difference.
Comment 13 Boris Petrov 2019-08-29 05:50:40 UTC
(In reply to Chen Levy from comment #11)
> I encountered a similar issue where multipart form submission resulted in
> none of the form parameters being visible from the servlet (no exception or
> error).
> I created a small test project containing a single HTML file with a
> multipart form, and a single servlet.
> No Java or JavaScript libraries are involved
> 
> Using the latest Firefox and Chrome I encounter the issue when uploading a
> 3MB file. The overheadDataThreadhold="0" setting seem to resolve it
> 
> I'd expect the default Tomcat distribution to allow these kind of activities
> without additional configuration
> 
> I can supply/attach additional information if needed
> Thanks

Chen Levy, if you could provide a simple sample project that, as you say, has no external dependencies and breaks with the default Tomcat configuration on the latest Chrome/Firefox, please do, so that the Tomcat team can take a look and perhaps re-evaluate the default settings.
Comment 14 Chen Levy 2019-08-29 12:12:44 UTC
Created attachment 36744 [details]
Simple project demonstrating multipart issue
Comment 15 Chen Levy 2019-08-29 12:22:02 UTC
(In reply to Boris Petrov from comment #13)
> Chen Levy, if you could provide a simple sample project that, as you say,
> has no external dependencies and breaks with the default Tomcat
> configuration on the latest Chrome/Firefox, please do so that Tomcat's team
> could perhaps take a look and reevaluate the default settings.

I've attached a simple project.
The issue is noticeable when filling in the form fields, including the file upload; in that case the form fields are not accessible from the servlet.

The issue appears with Tomcat 9.0.24 but not with 9.0.21.
The issue appears with HTTPS but not with HTTP.
The issue appears when there is a file to upload, but not without one.

The Tomcat server has HTTP/2 enabled.
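
For context, a sketch of the kind of connector configuration involved (ports, protocol implementation and certificate paths here are placeholders):

```
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"/>
    <SSLHostConfig>
        <Certificate certificateKeyFile="conf/localhost-key.pem"
                     certificateFile="conf/localhost-cert.pem"/>
    </SSLHostConfig>
</Connector>
```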
Comment 16 Mark Thomas 2019-09-04 18:17:46 UTC
The 2852*5, 2124, 2852*5, 2124, 2852*5, 2124, 2852*5, 2123, 1, etc. pattern occurs (in Chrome at least) with a direct POST request and no libraries present. That points to Chrome being responsible for the 1-byte DATA frame. I'll try to reach out to a contact on the Chrome dev team.
Comment 17 Mark Thomas 2019-09-04 22:51:51 UTC
https://bugs.chromium.org/p/chromium/issues/detail?id=1000809

The root cause currently looks to be a combination of how Chrome's buffering interacts with flow control windows that do not have an initial size of n*16k and Tomcat's recently added dislike of small, inefficient DATA frames as potentially abusive.

While investigating this issue I have found that Tomcat needs to make some changes to ensure that non-default initial window sizes are communicated to the client as early as possible. I'll commit those after I have completed some more testing.

I'm also looking into modifying Tomcat's overhead protection so a single DATA frame with a 1 byte payload isn't seen as abusive.
Comment 18 Mark Thomas 2019-09-05 14:10:01 UTC
I've made a couple of changes in light of my research.

1. I have added some code to ensure that, if a larger than default initial window size is configured, then Tomcat will follow the initial SETTINGS frame (typically in the same TCP packet) with a WINDOW_UPDATE frame to increase the size of the flow control window for the connection.

2. I have switched to using the average size of the current and previous DATA and WINDOW_UPDATE frames to test against their respective thresholds. This allows some smaller frames (e.g. those caused by the buffering behaviour seen here) when surrounded by larger frames but will still close the connection quickly if lots of small frames are used.

These changes are in:
- master for 9.0.25 onwards
- 8.5.x for 8.5.46 onwards

I'm still hopeful that Chrome will make some changes to reduce the number of small, non-final DATA frames it sends during an upload.
Comment 19 Boris Petrov 2019-09-25 12:11:38 UTC
Mark, I'm observing a similar behavior on 9.0.26. The socket is closed even when the `overheadDataThreadhold="0"` setting is set. Browser is Chromium 77.0.3865.90. I had to revert to 9.0.24 with that setting in order to make it work. Should I open a new issue or reopen this one perhaps?
Comment 20 Mark Thomas 2019-09-25 12:30:31 UTC
9.0.26 fixed a typo in the attribute name. You want overheadDataThreshold in 9.0.26 onwards.
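
So, from 9.0.26 onwards, the equivalent of the earlier sketch would use the corrected attribute name:

```
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                 overheadDataThreshold="0"/>
```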
Comment 21 Boris Petrov 2019-09-25 20:26:13 UTC
Thanks for the tip, Mark. I'll try that tomorrow. But... is this setting still needed? I thought Tomcat 9.0.26 would remove the need for custom configuration and would work out-of-the-box despite the Chrome behaviour/bugs? Or are you waiting for those to be resolved?
Comment 22 Mark Thomas 2019-09-25 20:34:21 UTC
9.0.26 will handle the behaviour in the original trace. That doesn't mean there isn't more "unusual" behaviour that will trigger the issue.
Comment 23 Boris Petrov 2019-09-25 20:49:07 UTC
Well, I'm not sure what exactly you mean, but if you need more stacktraces, here you go:

```
javax.ws.rs.ProcessingException: Failed to buffer the message content input stream.
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:907)
        ... user code
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Stream reset
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:340)
        at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:632)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:362)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:132)
        at java.base/java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.base/java.io.PushbackInputStream.read(PushbackInputStream.java:183)
        at java.base/java.io.FilterInputStream.read(FilterInputStream.java:107)
        at org.glassfish.jersey.message.internal.ReaderWriter.writeTo(ReaderWriter.java:92)
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:894)
        ... 52 common frames omitted
Caused by: java.io.IOException: Stream reset
        at org.apache.coyote.http2.Stream$StreamInputBuffer.doRead(Stream.java:1084)
        at org.apache.coyote.Request.doRead(Request.java:551)
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
        ... 60 common frames omitted
```

And this one:

```
javax.ws.rs.ProcessingException: Failed to buffer the message content input stream.
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:907)
        ... user code
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: The socket [140,630,383,004,352] associated with this connection has been closed.
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:340)
        at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:632)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:362)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:132)
        at java.base/java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.base/java.io.PushbackInputStream.read(PushbackInputStream.java:183)
        at java.base/java.io.FilterInputStream.read(FilterInputStream.java:107)
        at org.glassfish.jersey.message.internal.ReaderWriter.writeTo(ReaderWriter.java:92)
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:894)
        ... 52 common frames omitted
Caused by: java.io.IOException: The socket [140,630,383,004,352] associated with this connection has been closed.
        at org.apache.tomcat.util.net.AprEndpoint$AprSocketWrapper.doWrite(AprEndpoint.java:2315)
        at org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:793)
        at org.apache.tomcat.util.net.SocketWrapperBase.writeBlocking(SocketWrapperBase.java:529)
        at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:454)
        at org.apache.coyote.http2.Http2UpgradeHandler.writeWindowUpdate(Http2UpgradeHandler.java:808)
        at org.apache.coyote.http2.Stream$StreamInputBuffer.doRead(Stream.java:1127)
        at org.apache.coyote.Request.doRead(Request.java:551)
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
        ... 60 common frames omitted
```

Please tell me if you need more information or if you think this is "normal" and there is some issue in my configuration/setup.
Comment 24 Boris Petrov 2020-10-26 10:43:41 UTC
Mark, we're still randomly seeing this error, even on the latest Tomcat (9.0.39) and with `overheadDataThreshold="0"`. The Java version is 15 (but we've seen it on all previous versions too). Any suggestions as to what we can do?

javax.ws.rs.ProcessingException: Failed to buffer the message content input stream.
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:942)
        at com.company.rest.filters.Filter.filter(SourceFile:58)
        at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:108)
        at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:44)
        at org.glassfish.jersey.process.internal.Stages.process(Stages.java:173)
        at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:247)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
        at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
        at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:450)
        at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
        at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
        at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
        at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387)
        at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
        at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:555)
        at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
        at org.apache.coyote.http2.StreamProcessor.service(StreamProcessor.java:395)
        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
        at org.apache.coyote.http2.StreamProcessor.process(StreamProcessor.java:73)
        at org.apache.coyote.http2.StreamRunnable.run(StreamRunnable.java:35)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Stream reset
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:340)
        at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:632)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:362)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:132)
        at java.base/java.io.FilterInputStream.read(FilterInputStream.java:132)
        at java.base/java.io.PushbackInputStream.read(PushbackInputStream.java:182)
        at java.base/java.io.FilterInputStream.read(FilterInputStream.java:106)
        at org.glassfish.jersey.message.internal.ReaderWriter.writeTo(ReaderWriter.java:92)
        at org.glassfish.jersey.message.internal.InboundMessageContext.bufferEntity(InboundMessageContext.java:929)
        ... 52 common frames omitted
Caused by: java.io.IOException: Stream reset
        at org.apache.coyote.http2.Stream$StreamInputBuffer.doRead(Stream.java:1045)
        at org.apache.coyote.Request.doRead(Request.java:555)
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
        ... 60 common frames omitted
Comment 25 Mark Thomas 2020-10-29 19:21:15 UTC
The provided stack trace is unrelated to the original issue.

It looks like the connection is being closed as a result of a stream reset being received. That may be normal behaviour.

If you consider this a bug you'll need to open a new issue and provide a minimal test case that reproduces the issue.
Comment 26 noodles 2023-09-05 06:54:39 UTC
Tomcat version is 9.0.80; JDK version is 1.8.0_371.
We are also using HTTP/2, and we occasionally encounter this error:
```
I/O error while reading input message; nested exception is org.apache.catalina.connector.ClientAbortException: java.io.IOException: The socket [281,470,413,698,160] associated with this connection has been closed.
org.springframework.http.converter.HttpMessageNotReadableException: I/O error while reading input message; nested exception is org.apache.catalina.connector.ClientAbortException: java.io.IOException: The socket [281,470,413,698,160] associated with this connection has been closed.
	at org.springframework.web.servlet.mvc.method.annotation.AbstractMessageConverterMethodArgumentResolver.readWithMessageConverters(AbstractMessageConverterMethodArgumentResolver.java:217) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestResponseBodyMethodProcessor.readWithMessageConverters(RequestResponseBodyMethodProcessor.java:158) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestResponseBodyMethodProcessor.resolveArgument(RequestResponseBodyMethodProcessor.java:131) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:121) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:167) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:134) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:105) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:878) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:792) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1043) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:555) ~[tomcat-embed-core-9.0.80.jar:4.0.FR]
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:623) ~[tomcat-embed-core-9.0.80.jar:4.0.FR]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:209) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) ~[tomcat-embed-websocket-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at brave.servlet.TracingFilter.doFilter(TracingFilter.java:68) ~[brave-instrumentation-servlet-5.12.7.jar:?]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.logging.log4j2.web.Log4j2Filter.doFilter(Log4j2Filter.java:36) ~[log4j2-spring-boot-starter-1.3.50.RELEASE.jar:1.3.50.RELEASE]
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:358) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:271) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.logging.log4j2.emoji.EmojiFilter.doFilter(EmojiFilter.java:27) ~[log4j2-spring-boot-starter-1.3.50.RELEASE.jar:1.3.50.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.springframework.web.filter.ShallowEtagHeaderFilter.doFilterInternal(ShallowEtagHeaderFilter.java:106) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at com.popicorns.spring.redisson.common.handle.IdempotentFilter.doFilter(IdempotentFilter.java:95) ~[redisson-lock-spring-boot-starter-1.1.38.RELEASE.jar:?]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at com.popicorns.transfer.facade.handle.LogFilter.doFilter(LogFilter.java:52) ~[ec-warehouse-transfer-service-1.0.1-SNAPSHOT.jar:1.0.1-SNAPSHOT]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at com.popicorns.seata.filter.SeataFilter.doFilter(SeataFilter.java:33) ~[ec-seata-1.4.2.12.jar:1.4.2.12]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at brave.servlet.TracingFilter.doFilter(TracingFilter.java:87) ~[brave-instrumentation-servlet-5.12.7.jar:?]
	at org.springframework.cloud.sleuth.instrument.web.LazyTracingFilter.doFilter(TraceWebServletAutoConfiguration.java:141) ~[spring-cloud-sleuth-core-2.2.8.RELEASE.jar:2.2.8.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:97) ~[spring-boot-actuator-2.3.12.RELEASE.jar:2.3.12.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.25.RELEASE.jar:5.2.25.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at com.popicorns.commons.auth.service.impl.HttpProtocolAuthUserService.doFilter(HttpProtocolAuthUserService.java:92) ~[ec-auth-services-spring-boot-starter-1.0.55.RELEASE.jar:1.0.55.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:168) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:481) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:130) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.coyote.http2.StreamProcessor.service(StreamProcessor.java:432) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.coyote.http2.StreamProcessor.process(StreamProcessor.java:90) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.coyote.http2.StreamRunnable.run(StreamRunnable.java:35) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.80.jar:9.0.80]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
```
Please tell me what I can do.
Comment 27 Mark Thomas 2023-09-05 08:26:22 UTC
Don't re-open old issues because you think the stack trace looks similar.

Use the users list to obtain support.