Full-Text

This article is part of the XQuery Portal. It summarizes the features of the W3C XQuery Full Text Recommendation, and custom features of the implementation in BaseX.

Please read the separate Full-Text Index section in our documentation if you want to learn how to evaluate full-text requests on large databases within milliseconds.

Introduction

The XQuery and XPath Full Text Recommendation (XQFT) is a feature-rich extension of the XQuery language. It can be used to query both XML documents and single strings for words and phrases. BaseX was the first query processor that supported all features of the specification.

This section gives you a quick insight into the most important features of the language.

This is a simple example for a basic full-text expression:

"This is YOUR World" contains text "your world"

It yields true, because the search string is tokenized before it is compared with the tokenized input string. In the tokenization process, several normalizations take place, many of which can hardly be simulated with plain XQuery: for example, case and diacritics (umlauts, accents, etc.) are normalized, and an optional, language-dependent stemming algorithm is applied. Besides that, special characters such as whitespace and punctuation marks are ignored. Thus, this query also yields true:

"Well... Done!" contains text "well, done"

The occurs keyword comes into play when more than one occurrence of a token is to be found:

"one and two and three" contains text "and" occurs at least 2 times

Various range modifiers are available: exactly, at least, at most, and from ... to ....
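
As a small sketch, both of the following comparisons yield true:

"one and two and three" contains text "and" occurs exactly 2 times,
"one and two and three" contains text "and" occurs from 1 to 3 times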

Combining Results

In the following example, curly braces are used to combine multiple keywords:

for $country in doc('factbook')//country
where $country//religions[text() contains text { 'Sunni', 'Shia' } any]
return $country/name

The query will output the names of all countries with a religion element containing sunni or shia. The any keyword is optional; it can be replaced with one of the following keywords (see the sketch after this list):

  • all: all strings need to be found
  • any word: any of the single words within the specified strings need to be found
  • all words: all single words within the specified strings need to be found
  • phrase: all strings need to be found as a single phrase
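
A small sketch of how these keywords differ (the expected result is noted in each comment):

(: any: true, as "world" is found :)
"hello world" contains text { "world", "sun" } any,
(: all: false, as "sun" is missing :)
"hello world" contains text { "world", "sun" } all,
(: any word: true, as the single word "hello" is found :)
"hello world" contains text { "good morning", "hello sun" } any word,
(: all words: true, as both "world" and "hello" occur somewhere in the text :)
"hello world" contains text { "world hello" } all words,
(: phrase: false, as the exact phrase "world hello" does not occur :)
"hello world" contains text { "world hello" } phrase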

The keywords ftand, ftor and ftnot can also be used to combine multiple query terms. The following query yields the same result as the last one does:

doc('factbook')//country[descendant::religions contains text 'sunni' ftor 'shia']/name

The keywords not in are special: they are used to find tokens which are not part of a longer token sequence:

for $text in ("New York", "new conditions")
return $text contains text "New" not in "New York"

Due to the complex data model of the XQuery Full Text spec, the usage of ftand may lead to high memory consumption. If you encounter problems, simply use the all keyword:

doc('factbook')//country[descendant::religions contains text { 'Christian', 'Jewish' } all]/name

Positional Filters

A popular retrieval operation is to filter texts by the distance of the searched words. In this query…

<xml>
  <text>There is some reason why ...</text>
  <text>For some good yet unknown reason, ...</text>
  <text>The reason why some people ...</text>
</xml>//text[. contains text { "some", "reason" } all ordered distance at most 3 words]

…the first two texts will be returned as the result, because there are at most three words between some and reason. Additionally, the ordered keyword ensures that the words are found in the specified order, which is why the third text is excluded. Note that all is required here to guarantee that only those hits are accepted that contain all searched words.

The window keyword is related: it accepts those texts in which all keywords occur within the specified number of tokens. Can you guess what is returned by the following query?

("A C D", "A B C D E")[. contains text { "A", "E" } all window 3 words]

Sometimes it is interesting to only select texts in which all searched terms occur in the same sentence or paragraph (you can even filter for different sentences/paragraphs). This is obviously not the case in the following example:

'Mary told me, “I will survive!”.' contains text { 'will', 'told' } all words same sentence

By the way: In some examples above, the words unit was used, but sentences and paragraphs would have been valid alternatives.
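
As a small sketch, the sentences unit can be combined with a distance filter; the following comparison yields true, because no more than one sentence lies between the two keywords:

"The reason is simple. It was snowing." contains text { "reason", "snowing" } all distance at most 1 sentences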

Last but not least, three specifiers exist to filter results depending on the position of a hit (examples follow the list):

  • at start expects tokens to occur at the beginning of a text
  • at end expects tokens to occur at the text end
  • entire content only accepts texts which have no other words at the beginning or end
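
A small sketch of these specifiers (expected results in the comments):

(: true: the tokens occur at the beginning of the text :)
"The quick brown fox" contains text "the quick" at start,
(: true: the tokens occur at the end of the text :)
"The quick brown fox" contains text "brown fox" at end,
(: false: other words occur around the searched tokens :)
"The quick brown fox" contains text "quick brown" entire content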

Match Options

As indicated in the introduction, the input and query texts are tokenized before they are compared with each other. During this process, texts are split into tokens, which are then normalized, based on the following matching options:

  • If case is insensitive, no distinction is made between characters in upper and lower case. By default, the option is insensitive; it can also be set to sensitive:
"Respect Upper Case" contains text "Upper" using case sensitive
  • If diacritics is insensitive, characters with and without diacritics (umlauts, characters with accents) are declared as identical. By default, the option is insensitive; it can also be set to sensitive:
"'Äpfel' will not be found..." contains text "Apfel" using diacritics sensitive
  • If stemming is activated, words are shortened to a base form by a language-specific stemmer:
"catch" contains text "catches" using stemming
  • With the stop words option, a list of words can be defined that will be ignored when tokenizing a string. This is particularly helpful if the full-text index takes too much space (a standard stopword list for English texts is provided as etc/stopwords.txt in the full distributions of BaseX, and available online at http://files.basex.org/etc/stopwords.txt):
"You and me" contains text "you or me" using stop words ("and", "or"),
"You and me" contains text "you or me" using stop words at "http://files.basex.org/etc/stopwords.txt"
  • Related terms such as synonyms can be found with the sophisticated Thesaurus option.

The wildcards option facilitates search operations similar to simple regular expressions:

  • . matches a single arbitrary character.
  • .? matches either zero or one character.
  • .* matches zero or more characters.
  • .+ matches one or more characters.
  • .{min,max} matches between min and max characters.
"This may be interesting in the year 2000" contains text { "interest.*", "2.{3,3}" } using wildcards

This was a quick introduction to XQuery Full Text; you are invited to explore the numerous other features of the language!

BaseX Features

Languages

The chosen language determines how strings will be tokenized and stemmed. Either names (e.g. English, German) or codes (en, de) can be specified. A list of all language codes that are available on your system can be retrieved as follows:

declare namespace locale = "java:java.util.Locale";
distinct-values(locale:getAvailableLocales() ! locale:getLanguage(.))

By default, unless the language codes ja, ar, ko, th, or zh are specified, a tokenizer for Western texts is used (see the sketch after the following list):

  • Whitespaces are interpreted as token delimiters.
  • Sentence delimiters are ., !, and ?.
  • Paragraph delimiters are newlines (&#xa;).
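
The result of this tokenization can be inspected with the ft:tokenize function from the Full-Text Module (a small sketch; the exact output depends on the chosen match options):

(: returns the normalized tokens "there", "is", "some", "reason", "why" :)
ft:tokenize("There is some REASON why...")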

The basic JAR file of BaseX comes with built-in stemming support for English, German, Greek and Indonesian. Some more languages are supported if the following libraries are found in the classpath:

  • lucene-stemmers-3.4.0.jar includes the Snowball and Lucene stemmers for the following languages: Arabic, Bulgarian, Catalan, Czech, Danish, Dutch, Finnish, French, Hindi, Hungarian, Italian, Latvian, Lithuanian, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish.

The JAR files are included in the ZIP and EXE distributions of BaseX.

The following two queries, which both return true, demonstrate that stemming depends on the selected language:

"Indexing" contains text "index" using stemming,
"häuser" contains text "haus" using stemming using language "German"

Scoring

The XQuery Full Text Recommendation allows for the usage of scoring models and values within queries, with scoring being completely implementation-defined.

The scoring model of BaseX takes into consideration the number of found terms, their frequency in a text, and the length of a text. The shorter the input text is, the higher the scores will be:

(: Score values: 1 0.62 0.45 :)
for $text in ("A", "A B", "A B C")
let score $score := $text contains text "A"
order by $score descending
return <hit score='{ format-number($score, "0.00") }'>{ $text }</hit>

This simple approach has proven to consistently deliver good results, in particular when little is known about the structure of the queried XML documents.

Scoring values can be further processed to compute custom values:

let $terms := ('a', 'b')
let $scores := ft:score($terms ! ('a b c' contains text { . }))
return avg($scores)

Scoring is supported within full-text expressions, by ft:search, and by simple predicate tests that can be rewritten to ft:search:

let $string := 'a b'
return ft:score($string contains text 'a' ftand 'b'),

for $n score $s in ft:search('factbook', 'orthodox')
order by $s descending
return $s || ': ' || $n,

for $n score $s in db:get('factbook')//text()[. contains text 'orthodox']
order by $s descending
return $s || ': ' || $n

Thesaurus

One or more thesaurus files can be specified in a full-text expression. The following query returns false:

'hardware' contains text 'computers'
  using thesaurus default

If a thesaurus is employed…

<thesaurus xmlns="http://www.w3.org/2007/xqftts/thesaurus">
  <entry>
    <term>computers</term>
    <synonym>
      <term>hardware</term>
      <relationship>NT</relationship>
    </synonym>
  </entry>
</thesaurus>

…the result will be true:

'hardware' contains text 'computers'
  using thesaurus at 'thesaurus.xml'

Thesaurus files must comply with the XSD Schema of the XQFT Test Suite (but the namespace can be omitted). Apart from the relationships defined in ISO 2788 (NT: narrower term, RT: related term, etc.), custom relationships can be used.

The type of relationship and the level depth can be specified as well:

(: BT: find broader terms; NT means narrower term :)
'computers' contains text 'hardware'
  using thesaurus at 'x.xml' relationship 'BT' from 1 to 10 levels

More details can be found in the specification.

Fuzzy Querying

In addition to the official recommendation, BaseX supports a fuzzy search feature. The XQFT grammar was enhanced by the fuzzy match option to allow for approximate results in full texts:

Document 'doc.xml':

<doc>
   <a>house</a>
   <a>hous</a>
   <a>haus</a>
</doc>

Query:

//a[text() contains text 'house' using fuzzy]

Result:

<a>house</a>
<a>hous</a>

Fuzzy search is based on the Levenshtein distance. The maximum number of allowed errors is calculated by dividing the token length of a specified query term by 4. The query above yields two results as there is no error between the query term “house” and the text node “house”, and one error between “house” and “hous”.

A user-defined value can be adjusted globally via the LSERROR option or via an additional argument:

//a[text() contains text 'house' using fuzzy 3 errors]

Mixed Content

When working with so-called narrative XML documents, such as HTML, TEI, or DocBook documents, you typically have mixed content, i.e., elements containing a mix of text and markup, such as:

<p>This is only an illustrative <hi>example</hi>, not a <q>real</q> text.</p>

Since the logical flow of the text is not interrupted by the child elements, you will typically want to search across elements, so that the above paragraph would match a search for “real text”. For more examples, see XQuery and XPath Full Text 1.0 Use Cases.

To enable this kind of search, it is recommended to (a setup sketch follows the list):

  • Keep whitespace stripping turned off when importing XML documents. This can be done by ensuring that STRIPWS is disabled. This can also be done in the GUI if a new database is created (DatabaseNew…ParsingStrip Whitespaces).
  • Keep automatic indentation turned off. Ensure that the serialization parameter indent is set to no.
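
As a sketch, such a database could be created as follows; the database name and input path are placeholders, and it is assumed that the stripws and ftindex options are accepted by db:create:

db:create('narrative-db', 'path/to/documents/', 'documents/',
  map { 'stripws': false(), 'ftindex': true() })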

A query such as //p[. contains text 'real text'] will then match the example paragraph above. However, the full-text index will not be used in this query, so it may take a long time. The full-text index would be used for the query //p[text() contains text 'real text'], but this query will not find the example paragraph because the matching text is split over two text nodes.

Note that the node structure is ignored by the full-text tokenizer: The contains text expression applies all full-text operations to the string value of its left operand. As a consequence, the ft:mark and ft:extract functions will only yield useful results if they are applied to single text nodes, as the following example demonstrates:

(: Structure is ignored; no highlighting: :)
ft:mark(//p[. contains text 'real'])
(: Single text nodes are addressed: results will be highlighted: :)
ft:mark(//p[.//text() contains text 'real'])
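
Similarly, ft:extract returns shortened text extracts around the hits; as with ft:mark, single text nodes should be addressed (a small sketch):

ft:extract(//p[.//text() contains text 'real'])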

BaseX does not support the ignore option (without content) of the W3C XQuery Full Text 1.0 Recommendation. If you want to ignore descendant element content, such as footnotes or other material that does not belong to the same logical text flow, you can build a second database from your documents and exclude all information you do not want to search. See the following example (visit XQuery Update to learn more about updates):

let $docs := db:get('docs')
return db:create(
  'index-db',
  $docs update delete node (
    .//footnote
  ),
  $docs/db:path(.),
  map { 'ftindex': true() }
)

Functions

Some additional Full-Text Functions have been added to BaseX to extend the official language recommendation with useful features, such as explicitly requesting the score value of an item, marking the hits of a full-text request, or directly accessing the full-text index with the default index options.
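
A few sketches of these functions, assuming a factbook database with an up-to-date full-text index:

(: explicitly request a score value :)
ft:score("A B C" contains text "A"),
(: mark the hits of a full-text request :)
ft:mark(db:get('factbook')//text()[. contains text 'orthodox']),
(: access the full-text index directly with the default index options :)
ft:search('factbook', 'orthodox')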

Collations

See XQuery 3.1 for standard collation features.

By default, string comparisons in XQuery are based on the Unicode codepoint order; the collation URI http://www.w3.org/2003/05/xpath-functions/collation/codepoint identifies this ordering. In BaseX, the following URI syntax is supported to specify collations:

 http://basex.org/collation?lang=...;strength=...;decomposition=...

Semicolons can be replaced with ampersands; for convenience, the URL can be reduced to its query string component (including the question mark). All arguments are optional:

  • lang: A language code, selecting a Locale. It may be followed by a language variant. If no language is specified, the system’s default will be chosen. Examples: de, en-US.
  • strength: Level of difference considered significant in comparisons. Four strengths are supported: primary, secondary, tertiary, and identical. As an example, in German, "a" and "ä" differ on the secondary level, and "a" and "A" on the tertiary level.
  • decomposition: Defines how composed characters are handled. Three decompositions are supported: none, standard, and full. More details are found in the JavaDoc of the JDK.

Some Examples:

  • If a default collation is specified, it applies to all collation-dependent string operations in the query. The following expression yields true:
declare default collation 'http://basex.org/collation?lang=de;strength=secondary';
'Straße' = 'Strasse'
  • Collations can also be specified in order by and group by clauses of FLWOR expressions. This query returns à plutôt! bonjour!:
for $w in ("bonjour!", "à plutôt!") order by $w collation "?lang=fr" return $w
  • Various string functions exist that take an optional collation as an argument. The following functions give us a and 1 2 3 as results:
distinct-values(("a", "á", "à"), "?lang=it-IT;strength=primary"),
index-of(("a", "á", "à"), "a", "?lang=it-IT;strength=primary")

If the ICU Library is added to the classpath, the full Unicode Collation Algorithm features become available:

(: returns 0 (both strings are compared as equal) :)
compare('a-b', 'ab', 'http://www.w3.org/2013/collation/UCA?alternate=shifted')

Changelog

Version 9.6
  • Updated: Fuzzy Querying: Specify Levenshtein error.
Version 9.5
  • Removed: Scoring propagation.
Version 9.2
  • Added: Arabic stemmer.
Version 8.0
  • Updated: Scores will be propagated by the and and or expressions and in predicates.
Version 7.7
  • Added: Collations support.
Version 7.3
  • Removed: Trie index, which was specialized on wildcard queries. The fuzzy index now supports both wildcard and fuzzy queries.
  • Removed: TF/IDF scoring was discarded in favor of the internal scoring model.