Bug 7314

Summary: Bayes.pm, DECOMPOSE_BODY_TOKENS and Unicode
Product: SpamAssassin
Reporter: azotov
Component: Plugins
Assignee: SpamAssassin Developer Mailing List <dev>
Status: NEW
Severity: normal
Priority: P2
Version: SVN Trunk (Latest Devel Version)
Target Milestone: 4.0.0
Hardware: PC
OS: FreeBSD
Whiteboard:
Attachments: suggested patch

Description azotov 2016-04-27 15:10:58 UTC
Created attachment 5387 [details]
suggested patch

SpamAssassin fails to generate the additional Bayes tokens "Foo", "foo!" and "foo" from an original token "Foo!" when the original token contains Unicode characters from non-Latin languages. This happens because \w in the Bayes.pm regexes matches only ASCII letters, digits and underscore. As a consequence, for example, almost all Cyrillic Unicode characters are deleted by s/[^\w:\*]//gs, leading to empty tokens or oddities such as bare "sk:" tokens.
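The effect can be reproduced outside Perl. Python's re module shows the same contrast between an ASCII-only \w and a Unicode-aware \w (this is an illustration of the regex behavior, not the actual Bayes.pm code):

```python
import re

# DECOMPOSE_BODY_TOKENS-style cleanup: strip everything that is not a
# word character, ':' or '*'.  With ASCII-only \w (the behavior this
# report describes), a Cyrillic token is wiped out entirely.
ascii_clean = re.sub(r'[^\w:*]', '', 'привет!', flags=re.ASCII)
print(repr(ascii_clean))       # -> '' (empty token)

# With a Unicode-aware \w the token survives; only the '!' is stripped.
unicode_clean = re.sub(r'[^\w:*]', '', 'привет!')
print(repr(unicode_clean))     # -> 'привет'

# A Latin token works either way, which is why the bug is easy to miss.
latin_clean = re.sub(r'[^\w:*]', '', 'Foo!', flags=re.ASCII)
print(repr(latin_clean))       # -> 'Foo'
```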

This problem can be corrected by the attached patch. I have little experience with Unicode in Perl, so there may be a better solution. The main idea is to make \w match any Unicode word character, not just ASCII, and to replace [A-Z] with the more generic [[:upper:]].
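The decomposition the report describes ("Foo!" yielding "Foo", "foo!" and "foo") can be sketched as follows. This is a Python illustration of the intended Unicode-aware behavior, not the actual Bayes.pm logic; the function name is hypothetical:

```python
import re

def decompose(token):
    """Yield the extra Bayes tokens derived from an original token:
    for "Foo!" that is "Foo", "foo!" and "foo".  Uses a Unicode-aware
    \\w and str.lower() instead of an ASCII-only \\w and [A-Z]."""
    extras = []
    stripped = re.sub(r'[^\w:*]', '', token)   # drop non-word chars: "Foo"
    if stripped != token:
        extras.append(stripped)
    lowered = token.lower()                    # Unicode-aware lower-casing: "foo!"
    if lowered != token:
        extras.append(lowered)
    both = re.sub(r'[^\w:*]', '', lowered)     # both steps combined: "foo"
    if both != token and both not in extras:
        extras.append(both)
    return extras

print(decompose('Foo!'))      # -> ['Foo', 'foo!', 'foo']
print(decompose('Привет!'))   # works for Cyrillic too
```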

Maybe it is better to handle Unicode characters not just in the DECOMPOSE_BODY_TOKENS section, but everywhere in the _tokenize_line sub. This idea was also mentioned in the discussion of Bug 7130. Many regexes in the _tokenize_line sub currently do not work properly for non-Latin Unicode characters. For example, splitting on "..." works only for Latin words, the regex in the IGNORE_TITLE_CASE section lower-cases only the capital letters A-Z, and so on.
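The IGNORE_TITLE_CASE problem can be shown the same way: an ASCII-only title-case pattern matches "Foo" but never "Привет", so the Cyrillic token is never lower-cased. A Python sketch (illustration only; the hypothetical helper approximates what a Unicode-aware check could do):

```python
import re

# ASCII-only title-case test, as the report describes: matches "Foo"
# but not a Cyrillic title-case word.
ascii_title = re.compile(r'^[A-Z][a-z]+$')
print(bool(ascii_title.match('Foo')))      # -> True
print(bool(ascii_title.match('Привет')))   # -> False, never lower-cased

# A Unicode-aware equivalent using str.istitle()/str.lower():
def normalize_title_case(tok):
    return tok.lower() if tok.istitle() else tok

print(normalize_title_case('Foo'))      # -> 'foo'
print(normalize_title_case('Привет'))   # -> 'привет'
```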
Comment 1 Mark Martinec 2016-06-15 23:35:31 UTC
> Maybe it is better to work with Unicode characters not just in
> DECOMPOSE_BODY_TOKENS section, but everywhere in _tokenize_line sub ...

Thanks. Yes, there are several problems still associated with
the historical assumption of single-byte characters. Some have been
addressed in the current trunk code, but there are more, like the one
reported here. To be addressed in the next major version (4.0) ...