"tokenize BUG (Text processing)"

Hi, I am using the Tokenize operator (Text Processing) with the 'specify characters' option. My specified characters are symbols and digits (.:@/_",*$#!?^ ()<>+-%'"[]{}~`0123456789), so my tokens come out as English words.
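
For reference, here is roughly what I understand this mode to do; a minimal plain-Python sketch, assuming 'specify characters' simply splits the text at any run of the listed characters (the operator's internals are a black box to me):

```python
# Minimal sketch of 'specify characters' tokenization: split the text at any
# run of the listed characters. The character set is the one from my setup;
# the splitting behavior is my assumption, not the operator's documented spec.
import re

SPLIT_CHARS = '.:@/_",*$#!?^ ()<>+-%\'"[]{}~`0123456789'
PATTERN = "[" + re.escape(SPLIT_CHARS) + "]+"

def tokenize_specified(text):
    # re.split leaves empty strings at the edges; drop them.
    return [tok for tok in re.split(PATTERN, text) if tok]

print(tokenize_specified("We apply rule 3; they applied it in May."))
# -> ['We', 'apply', 'rule', ';', 'they', 'applied', 'it', 'in', 'May']
```

Note that the semicolon survives as its own token, since ';' is not in the split set.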

However, when I filter stopwords (English) and apply stemming (Porter), I seem to have hit a bug: the results do not stem words correctly.


For example, variants of "apply" end up separated rather than combined, and what is weirder is that the output contains a new keyword "appli" that never existed in the original documents.
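
For what it's worth, a reference implementation of Porter produces exactly this "appli" form; a minimal sketch, assuming Python with NLTK installed (whether RapidMiner's stemmer should match this is part of my question):

```python
# Minimal sketch, assuming Python with NLTK installed (pip install nltk).
# NLTK's PorterStemmer implements the published Porter algorithm, which by
# design turns a final 'y' into 'i', so the whole "apply" family stems alike.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["apply", "applied", "applies"]:
    print(word, "->", stemmer.stem(word))
# apply   -> appli
# applied -> appli
# applies -> appli
```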

However, when I use Tokenize in 'non letters' mode, the stemming is correct and all the variants are grouped under the keyword 'apply'.
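
The 'non letters' mode can be approximated the same way; again a plain-Python sketch, assuming that mode simply splits on anything that is not a letter:

```python
# Sketch of 'non letters' tokenization, assuming it splits on any character
# that is not an ASCII letter (an assumption about the operator's behavior).
import re

def tokenize_non_letters(text):
    return [tok for tok in re.split(r"[^A-Za-z]+", text) if tok]

print(tokenize_non_letters("We apply rule 3; they applied it in May."))
# -> ['We', 'apply', 'rule', 'they', 'applied', 'it', 'in', 'May']
```

Note that here the semicolon disappears entirely.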

Is this a bug, or something else?

How can I resolve this problem?

I prefer the 'specify characters' option because I would like some special characters to be retained.



Thanks.