7.8.15. TokenUnigram
7.8.15.1. Summary
TokenUnigram is similar to TokenBigram. The difference between them is the token unit: one character per token instead of two.
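For example, tokenizing the same text with both tokenizers makes the difference in token unit visible. The following is an illustrative sketch, not captured output; the comments describe the expected token units, and the actual output has the same JSON structure as the execution example in the Usage section below.
tokenize TokenBigram "日本語の勉強" NormalizerAuto
# two-character tokens such as "日本", "本語", "語の", ...
tokenize TokenUnigram "日本語の勉強" NormalizerAuto
# one-character tokens: "日", "本", "語", "の", "勉", "強"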
7.8.15.3. Usage
If a normalizer is used, TokenUnigram tokenizes ASCII characters with a white-space-separated tokenize method and non-ASCII characters with a unigram tokenize method.
When TokenUnigram tokenizes non-ASCII characters, each token is 1 character, as in the example below.
Execution example:
tokenize TokenUnigram "日本語の勉強" NormalizerAuto
# [
#   [
#     0,
#     1546584495.218799,
#     0.0002140998840332031
#   ],
#   [
#     {
#       "value": "日",
#       "position": 0,
#       "force_prefix": false,
#       "force_prefix_search": false
#     },
#     {
#       "value": "本",
#       "position": 1,
#       "force_prefix": false,
#       "force_prefix_search": false
#     },
#     {
#       "value": "語",
#       "position": 2,
#       "force_prefix": false,
#       "force_prefix_search": false
#     },
#     {
#       "value": "の",
#       "position": 3,
#       "force_prefix": false,
#       "force_prefix_search": false
#     },
#     {
#       "value": "勉",
#       "position": 4,
#       "force_prefix": false,
#       "force_prefix_search": false
#     },
#     {
#       "value": "強",
#       "position": 5,
#       "force_prefix": false,
#       "force_prefix_search": false
#     }
#   ]
# ]
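For ASCII text, TokenUnigram behaves like a white-space-separated tokenizer rather than emitting one token per character. The following is a sketch with a hypothetical input, not captured output; the comment describes the tokens you should expect, given that NormalizerAuto downcases ASCII letters.
tokenize TokenUnigram "Hello World" NormalizerAuto
# two tokens, "hello" and "world", rather than one token per character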