I'm using a WebSocket service that manages a large number of open connections, sending messages back to clients. The number of concurrent messages is negligible compared to the number of active connections. In this situation I've found that the running Tomcat process uses approximately 100KB of memory per open WebSocket connection. Looking at the source code, I found several classes with buffers allocated in the constructor. With the default value of 8192 for org.apache.tomcat.websocket.DEFAULT_BUFFER_SIZE:

In org.apache.tomcat.websocket.WsFrameBase:
- inputBuffer: 8KB
- messageBufferBinary: 8KB
- messageBufferText: 16KB

In org.apache.tomcat.websocket.WsRemoteEndpointImplBase:
- outputBuffer: 8KB
- encoderBuffer: 8KB

These buffers sum to 48KB of memory per WebSocket connection. Changing the allocation strategy to on-demand buffer allocation could, in the situation above, reduce the memory footprint by 480MB for 10K active connections. The buffers could also be managed by a pool manager, reducing allocation costs.
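To illustrate the proposed strategy, here is a minimal sketch of a combined on-demand/pooled buffer allocator. This is a hypothetical illustration, not Tomcat's actual API: the class name `BufferPool` and its methods are assumptions, and a real implementation would also need to bound the pool size.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: buffers are allocated only when a connection first
// needs one (on demand) and are recycled through a pool afterwards,
// instead of being allocated eagerly in every connection's constructor.
public class BufferPool {
    private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();
    private final int bufferSize;

    public BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Reuse a pooled buffer if one is available; allocate only otherwise.
    public ByteBuffer acquire() {
        ByteBuffer b = pool.poll();
        return (b != null) ? b : ByteBuffer.allocate(bufferSize);
    }

    // Return the buffer for reuse rather than dropping it for GC.
    public void release(ByteBuffer b) {
        b.clear();
        pool.offer(b);
    }

    public int pooledCount() {
        return pool.size();
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(8192);
        ByteBuffer b1 = pool.acquire();     // allocated on first demand
        pool.release(b1);                   // recycled into the pool
        ByteBuffer b2 = pool.acquire();     // reused, no new allocation
        System.out.println(b1 == b2);       // true: same instance reused
        System.out.println(b2.capacity());  // 8192
    }
}
```

With such a scheme, an idle connection that never sends or receives a large message would hold no buffers at all, which is where the estimated savings for mostly-idle connections come from.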