| Summary: | Allow to limit charsets / encodings supported by Tomcat | | |
|---|---|---|---|
| Product: | Tomcat 9 | Reporter: | Konstantin Kolinko <knst.kolinko> |
| Component: | Catalina | Assignee: | Tomcat Developers Mailing List <dev> |
| Status: | NEW | | |
| Severity: | enhancement | CC: | msjs.sumudini |
| Priority: | P2 | | |
| Version: | unspecified | | |
| Target Milestone: | ----- | | |
| Hardware: | PC | | |
| OS: | All | | |
Description
Konstantin Kolinko 2016-01-14 11:17:36 UTC
That's not a bad idea, but is it really practical in production? Also, historically UTF-8 has caused the most security issues from what I know, and it is not going to be possible to disable it. +1 anyway, but only as a system property (as you proposed), since it is too global and too specific.

What is the issue here? IIRC, Tomcat has a cache of Charsets it will use, so a client specifying a little-used charset will just thrash that cache a bit.

Chris, the cache evolved into a static preloaded set some time ago (since r1140156); it is not updated at runtime.

The issue here is that the client-provided charset name is used for processing both client-provided data and application-provided data (e.g. the forward() processing code touched by the recent fix for bug 58836). Application-provided data usually carries the assumption that the client-provided charset is sane (e.g. a superset of US-ASCII). I am just not sure that this assumption holds for all charsets implemented by a JRE; I do not know all of them. E.g. current Java 8 implements 170 charsets, some of which have names starting with "x-".

It is easy to enforce a fixed charset (via SetCharacterEncodingFilter), but that breaks the ability for a client to specify a charset at all. It is possible to implement a similar Filter that checks the provided charset name (probably against some whitelist).
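The whitelist check such a Filter would need could be sketched as follows. This is a minimal illustration, not Tomcat code: the `ALLOWED` set and `isAllowed` helper are hypothetical names, and a real Filter would make the whitelist configurable and apply the check in `doFilter()` before calling `request.setCharacterEncoding()`. Note that comparing by the JRE's canonical charset name handles aliases (e.g. "UTF8" resolves to "UTF-8").

```java
import java.nio.charset.Charset;
import java.util.Locale;
import java.util.Set;

public class CharsetWhitelist {
    // Hypothetical whitelist; a real Filter would read this from init-params.
    private static final Set<String> ALLOWED =
            Set.of("utf-8", "iso-8859-1", "us-ascii");

    // Returns true only if the client-supplied charset name is syntactically
    // legal, supported by the JRE, and whitelisted by canonical name.
    static boolean isAllowed(String name) {
        if (name == null) {
            return false;
        }
        try {
            return Charset.isSupported(name)
                    && ALLOWED.contains(
                            Charset.forName(name).name().toLowerCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            // Illegal charset name syntax (IllegalCharsetNameException).
            return false;
        }
    }

    public static void main(String[] args) {
        // The JRE ships far more charsets than most applications ever need;
        // many have vendor-specific names starting with "x-". The exact count
        // varies by JRE version and installed modules.
        long xDash = Charset.availableCharsets().keySet().stream()
                .filter(n -> n.startsWith("x-")).count();
        System.out.println("total=" + Charset.availableCharsets().size()
                + " x-prefixed=" + xDash);
        System.out.println(isAllowed("UTF-8"));
        System.out.println(isAllowed("x-IBM834"));
    }
}
```

A Filter built around this check would reject (or ignore) a client-specified charset outside the whitelist while still letting well-behaved clients choose among the sane encodings, which is the middle ground between SetCharacterEncodingFilter's fixed charset and accepting every charset the JRE knows.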