This article is linked from the [[Full-Text]] page. It gives some insight into the implementation of the full-text features for Japanese text corpora. The Japanese version is [https://files.basex.org/etc/ja-ft.pdf also available as PDF]. Thank you to [http://blog.infinite.jp Toshio HIRAI] for integrating the lexer in BaseX!

==Introduction==

The lexical analysis of Japanese documents is performed by [https://igo.osdn.jp/ Igo]. Igo is a ''morphological analyser'', and some of the advantages and reasons for using Igo are:

* It is compatible with the results of the prominent morphological analyzer "MeCab".
* It can use the dictionary distributed by the Project MeCab.
* The morphological analyzer is implemented in Java and is relatively fast.

Japanese tokenization will be activated in BaseX if Igo is found in the classpath. The [https://osdn.net/projects/igo/releases/ igo-0.4.3.jar] library is currently included in all distributions of BaseX.
In addition to the library, one of the following dictionary files must either be unzipped into the current directory, or into the <code>etc</code> sub-directory of the project’s [[Configuration#Home Directory|Home Directory]]:
* IPA Dictionary: https://files.basex.org/etc/ipadic.zip
* NAIST Dictionary: https://files.basex.org/etc/naistdic.zip
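Once the library and one of the dictionaries are in place, a simple full-text query can be used to check that Japanese tokenization is active. This is a minimal sketch; with a working setup, the expression should yield <code>true</code>:

<syntaxhighlight lang="xquery">
(: The sentence is split into morphemes by Igo, so the single token 本 is found.
   Without Japanese tokenization, the sentence would not be split this way. :)
'私は本を書きました。' contains text '本' using language 'ja'
</syntaxhighlight>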
==Lexical Analysis==
The example sentence "私は本を書きました。(I wrote a book.)"
is analyzed as follows.
<syntaxhighlight>私は本を書きました。

私 名詞,代名詞,一般,*,*,*,私,ワタシ,ワタシ
は 助詞,係助詞,*,*,*,*,は,ハ,ワ
本 名詞,一般,*,*,*,*,本,ホン,ホン
を 助詞,格助詞,一般,*,*,*,を,ヲ,ヲ
書き 動詞,自立,*,*,五段・カ行イ音便,連用形,書く,カキ,カキ
まし 助動詞,*,*,*,特殊・マス,連用形,ます,マシ,マシ
た 助動詞,*,*,*,特殊・タ,基本形,た,タ,タ
。 記号,句点,*,*,*,*,。,。,。
</syntaxhighlight>
The left-hand element of each decomposed part is called the "Surface"; the analysis to its right is called the "Morpheme". The Morpheme component is built as follows:
<syntaxhighlight>
品詞,品詞細分類1,品詞細分類2,品詞細分類3,活用形,活用型,原形,読み,発音
(POS, subtyping POS 1, subtyping POS 2, subtyping POS 3, inflections, use type, prototype, reading, pronunciation)
</syntaxhighlight>
Of these, the surface is used as a token, and the contents of the morpheme analysis are used for indexing and stemming.
==Parsing==
During indexing and parsing, the input strings are split into single ''tokens''. Tokens that belong to word classes without content semantics are excluded from further processing, in particular:

* Postpositional particle (助詞)
* Auxiliary verb (助動詞)
* Mark (記号)

Thus, in the example above, only the tokens {{Code|私}}, {{Code|本}}, and {{Code|書き}} will be passed to the indexer.
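The effect of this filtering can be inspected with BaseX's <code>ft:tokenize</code> function from the Full-Text Module (a sketch, assuming a BaseX version whose <code>ft:tokenize</code> accepts an options map; the exact output depends on the dictionary in use):

<syntaxhighlight lang="xquery">
(: Tokenize the example sentence with the Japanese lexer.
   Particles, auxiliary verbs and punctuation are dropped, so the
   expected result is the token sequence 私, 本, 書き. :)
ft:tokenize('私は本を書きました。', map { 'language': 'ja' })
</syntaxhighlight>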
==Token Processing==

"Fullwidth" and "Halfwidth" characters (as defined by the [https://unicode.org/Public/UNIDATA/EastAsianWidth.txt East Asian Width Properties]) are not distinguished (this is the so-called ZENKAKU/HANKAKU problem). For example, <code>ＸＭＬ</code> and <code>XML</code> will be treated as the same word. If documents are ''hybrid'', i.e. written in multiple languages, this is also helpful for some other options of the XQuery Full Text Specification, such as the [https://www.w3.org/TR/xpath-full-text-10/#ftcaseoption Case] or the [https://www.w3.org/TR/xpath-full-text-10/#ftdiacriticsoption Diacritics] option.
==Stemming==
In Japanese, stemming means that the results of the morphological analysis are led back to their prototype (base form). For example, the verb forms 書く and 書い share the same prototype 書く, as can be seen in the analysis of their morphemes:
<syntaxhighlight>
書く 動詞,自立,*,*,五段・カ行イ音便,基本形,[書く],カク,カク
書い 動詞,自立,*,*,五段・カ行イ音便,連用タ接続,[書く],カイ,カイ
た 助動詞,*,*,*,特殊・タ,基本形,た,タ,タ
</syntaxhighlight>
Because the "auxiliary verb" is always excluded from the tokens, there is
is returned for the following two types of queries:
<syntaxhighlight lang="xquery">
'私は本を書いた' contains text '書く' using stemming using language 'ja'
'私は本を書く' contains text '書いた' using stemming using language 'ja'
</syntaxhighlight>
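Conversely, if the stemming option is omitted, only the surface forms are compared; the following query would be expected to return <code>false</code>, because the tokens 書い and 書く differ (a sketch to contrast with the stemmed queries above):

<syntaxhighlight lang="xquery">
(: Without stemming, the inflected form 書い is not led back to its prototype 書く. :)
'私は本を書いた' contains text '書く' using language 'ja'
</syntaxhighlight>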
==Wildcards==
The Wildcard option in XQuery Full-Text is available for Japanese as well.
For example, the following queries both return <code>true</code>:
<syntaxhighlight lang="xquery">
'芥川龍之介' contains text '.之介' using wildcards using language 'ja'
'芥川竜之介' contains text '.之介' using wildcards using language 'ja'
</syntaxhighlight>
However, there is a special case that requires attention. The following
query will yield <code>false</code>:
<syntaxhighlight lang="xquery">
'芥川龍之介' contains text '芥川.之介' using wildcards using language 'ja'
</syntaxhighlight>
This is because 芥川龍之介 is split into the two tokens 芥川 and 龍之介, and the wildcard metacharacters do not match across this word boundary. You need to insert an additional whitespace as a word boundary into the query:
<syntaxhighlight lang="xquery">
'芥川龍之介' contains text '芥川 .之介' using wildcards using language 'ja'
</syntaxhighlight>
As an alternative, you may modify the query as follows:
<syntaxhighlight lang="xquery">
'芥川龍之介' contains text '芥川' ftand '.之介' using wildcards using language 'ja'
</syntaxhighlight>