
Full-Text

This article summarizes the features of the W3C XQuery Full Text Recommendation, and custom features of the implementation in BaseX.

Please read the separate Full-Text Index section in our documentation if you want to learn how to evaluate full-text requests on large databases within milliseconds.

Introduction

The XQuery and XPath Full Text Recommendation (XQFT) is a feature-rich extension of the XQuery language. It can be used to query both XML documents and single strings for words and phrases. BaseX was the first query processor that supported all features of the specification.

This section gives you a quick insight into the most important features of the language.

This is a simple example for a basic full-text expression:

"This is YOUR World" contains text "your world"

It yields true, because the search string is tokenized before it is compared with the tokenized input string. In the tokenization process, several normalizations take place. Many of these steps can hardly be simulated with plain XQuery: for example, distinctions between upper and lower case and diacritics (umlauts, accents, etc.) are removed, and an optional, language-dependent stemming algorithm is applied. Besides that, special characters such as whitespace and punctuation marks are ignored. Thus, this query also yields true:

"Well... Done!" contains text "well, done"

The occurs keyword comes into play when more than one occurrence of a token is to be found:

"one and two and three" contains text "and" occurs at least 2 times

Various range modifiers are available: exactly, at least, at most, and from ... to ....
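
As a small sketch of these modifiers, both of the following comparisons yield true, because and occurs exactly twice in the input string:

"one and two and three" contains text "and" occurs exactly 2 times,
"one and two and three" contains text "and" occurs from 1 to 3 times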

Combining Results

In the following example, curly braces are used to combine multiple keywords:

for $country in doc('factbook')//country
where $country//religions[text() contains text { 'Sunni', 'Shia' } any]
return $country/name

The query will output the names of all countries with a religions element containing Sunni or Shia. The any keyword is optional; it can be replaced with:

  • all: all strings need to be found
  • any word: any of the single words within the specified strings need to be found
  • all words: all single words within the specified strings need to be found
  • phrase: all strings need to be found as a single phrase
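
As a small illustration of the difference between any word and all words (the input string is made up), the first of the following comparisons yields true, because the single word sunni is found, whereas the second yields false, because shia is missing:

(: true: at least one of the single words occurs in the text :)
"Sunni Islam" contains text { "sunni shia" } any word,
(: false: not all of the single words occur in the text :)
"Sunni Islam" contains text { "sunni shia" } all words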

The keywords ftand, ftor and ftnot can also be used to combine multiple query terms. The following query yields the same result as the last one does:

doc('factbook')//country[descendant::religions
  contains text 'sunni' ftor 'shia']/name

The keywords not in are special: they are used to find tokens which are not part of a longer token sequence:

for $text in ("New York", "new conditions")
return $text contains text "New" not in "New York"

Due to the complex data model of the XQuery Full Text spec, the usage of ftand may lead to high memory consumption. If you encounter problems, simply use the all keyword:

doc('factbook')//country[descendant::religions
  contains text { 'Christian', 'Jewish' } all]/name

Positional Filters

A popular retrieval operation is to filter texts by the distance of the searched words. In this query…

<xml>
  <text>There is some reason why ...</text>
  <text>For some good yet unknown reason, ...</text>
  <text>The reason why some people ...</text>
</xml>//text[. contains text { "some", "reason" } all ordered distance at most 3 words]
…the first two texts will be returned as the result, because there are at most three words between some and reason. Additionally, the ordered keyword ensures that the words are found in the specified order, which is why the third text is excluded. Note that all is required here to guarantee that only those hits are accepted that contain all searched words.

The window keyword is related: it accepts those texts in which all keywords occur within the specified number of tokens. Can you guess what is returned by the following query?

("A C D", "A B C D E")[. contains text { "A", "E" } all window 3 words]

Sometimes it is interesting to select only texts in which all searched terms occur in the same sentence or paragraph (you can even filter for different sentences/paragraphs, as sketched after the next example). This is obviously not the case in the following example:

'Mary told me, “I will survive!”.' contains text { 'will', 'told' } all words same sentence
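
The opposite filter can be sketched as follows. Assuming the standard sentence delimiters (., !, ?), the query yields true, because the two searched words occur in two different sentences:

'One word here. Another word there!' contains text { 'one', 'another' } all words different sentence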

By the way: In some examples above, the words unit was used, but sentences and paragraphs would have been valid alternatives.

Last but not least, three specifiers exist to filter results depending on the position of a hit:

  • at start expects tokens to occur at the beginning of a text
  • at end expects tokens to occur at the text end
  • entire content only accepts texts which have no other words at the beginning or end
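
A small sketch of these specifiers; all three of the following comparisons yield true:

"hello world" contains text "hello" at start,
"hello world" contains text "world" at end,
"hello world" contains text "hello world" entire content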

Match Options

As indicated in the introduction, the input and query texts are tokenized before they are compared with each other. During this process, texts are split into tokens, which are then normalized, based on the following matching options:

  • If case is insensitive, no distinction is made between characters in upper and lower case. By default, the option is insensitive; it can also be set to sensitive:
"Respect Upper Case" contains text "Upper" using case sensitive
  • If diacritics is insensitive, characters with and without diacritics (umlauts, characters with accents) are declared as identical. By default, the option is insensitive; it can also be set to sensitive:
"'Äpfel' will not be found..." contains text "Apfel" using diacritics sensitive
  • If stemming is activated, words are shortened to a base form by a language-specific stemmer:
"catch" contains text "catches" using stemming
  • With the stop words option, a list of words can be defined that will be ignored when tokenizing a string. This is particularly helpful if the full-text index takes too much space (a standard stop word list for English texts is provided in the file etc/stopwords.txt in the full distributions of BaseX and is available online at https://files.basex.org/etc/stopwords.txt):
"You and me" contains text "you or me" using stop words ("and", "or"),
"You and me" contains text "you or me" using stop words at
  "https://files.basex.org/etc/stopwords.txt"
  • Related terms such as synonyms can be found with the sophisticated Thesaurus option.

The wildcards option facilitates search operations similar to simple regular expressions:

  • . matches a single arbitrary character.
  • .? matches either zero or one character.
  • .* matches zero or more characters.
  • .+ matches one or more characters.
  • .{min,max} matches between min and max characters.
"This may be interesting in the year 2000"
  contains text { "interest.*", "2.{3,3}" } using wildcards

This was a quick introduction to XQuery Full Text; you are invited to explore the numerous other features of the language!

BaseX Features

Languages

The chosen language determines how strings will be tokenized and stemmed. Either names (e.g. English, German) or codes (en, de) can be specified.

A list of all language codes that are available on your system can be retrieved as follows:

declare namespace locale = "java:java.util.Locale";
distinct-values(locale:getAvailableLocales() ! locale:getLanguage(.))

By default, unless one of the language codes ja, ar, ko, th, or zh is specified, a tokenizer for Western texts is used:

  • Whitespace characters are interpreted as token delimiters.
  • Sentence delimiters are ., !, and ?.
  • Paragraph delimiters are newlines (&#xa;).
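
The effects of tokenization can be inspected with the ft:tokenize function of the Full-Text Module; the following call is a small sketch and should return the normalized tokens hello and world:

ft:tokenize("Hello, WORLD!")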

The basic JAR file of BaseX comes with built-in stemming support for English, German, Greek and Indonesian. Some more languages are supported if the following libraries are found in the classpath:

  • lucene-stemmers-3.4.0.jar includes the Snowball and Lucene stemmers for the following languages: Arabic, Bulgarian, Catalan, Czech, Danish, Dutch, Finnish, French, Hindi, Hungarian, Italian, Latvian, Lithuanian, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish.
  • igo-0.4.3.jar: Full-Text Japanese explains how Igo can be integrated, and how Japanese texts are tokenized and stemmed.

The JAR files are included in the ZIP and EXE distributions of BaseX.

The following two queries, which both return true, demonstrate that stemming depends on the selected language:

"Indexing" contains text "index" using stemming,
"häuser" contains text "haus" using stemming using language "German"

Scoring

The XQuery Full Text Recommendation allows for the usage of scoring models and values within queries, with scoring being completely implementation-defined.

The scoring model of BaseX takes into consideration the number of found terms, their frequency in a text, and the length of a text. The shorter the input text is, the higher the scores will be:

(: Score values: 1 0.62 0.45 :)
for $text in ("A", "A B", "A B C")
let score $score := $text contains text "A"
order by $score descending
return <hit score='{ format-number($score, "0.00") }'>{ $text }</hit>

This simple approach has proven to consistently deliver good results, in particular when little is known about the structure of the queried XML documents.

Scoring values can be further processed to compute custom values:

let $terms := ('a', 'b')
let $scores := ft:score($terms ! ('a b c' contains text { . }))
return avg($scores)

Scoring is supported within full-text expressions, by ft:search, and by simple predicate tests that can be rewritten to ft:search:

let $string := 'a b'
return ft:score($string contains text 'a' ftand 'b'),

for $n score $s in ft:search('factbook', 'orthodox')
order by $s descending
return $s || ': ' || $n,

for $n score $s in db:get('factbook')//text()[. contains text 'orthodox']
order by $s descending
return $s || ': ' || $n

Thesaurus

One or more thesaurus files can be specified in a full-text expression. The following query returns false:

'hardware' contains text 'computers' using thesaurus default

If a thesaurus is employed…

<thesaurus xmlns="http://www.w3.org/2007/xqftts/thesaurus">
  <entry>
    <term>computers</term>
    <synonym>
      <term>hardware</term>
      <relationship>NT</relationship>
    </synonym>
  </entry>
</thesaurus>
…the result will be true:
'hardware' contains text 'computers' using thesaurus at 'thesaurus.xml'

Thesaurus files must comply with the XSD Schema of the XQFT Test Suite (but the namespace can be omitted). Apart from the relationships defined in ISO 2788 (NT: narrower term, RT: related term, etc.), custom relationships can be used.

The type of relationship and the level depth can be specified as well:

(: BT: find broader terms; NT means narrower term :)
'computers' contains text 'hardware'
  using thesaurus at 'x.xml' relationship 'BT' from 1 to 10 levels

More details can be found in the specification.

Fuzzy Querying

In addition to the official recommendation, BaseX supports a fuzzy search feature. The XQFT grammar has been extended with the fuzzy match option to allow for approximate matches in full texts:

Document 'doc.xml':
<doc>
   <a>house</a>
   <a>hous</a>
   <a>haus</a>
</doc>
Query:
//a[text() contains text 'house' using fuzzy]
Result:
<a>house</a>
<a>hous</a>

Fuzzy search is based on the Levenshtein distance. The maximum number of allowed errors is calculated by dividing the token length of a specified query term by 4. The query above yields two results as there is no error between the query term “house” and the text node “house”, and one error between “house” and “hous”.

The number of allowed errors can be adjusted globally via the LSERROR option, or directly in the query via an additional argument:

//a[text() contains text 'house' using fuzzy 3 errors]

Mixed Content

When working with so-called narrative XML documents, such as HTML, TEI, or DocBook documents, you typically have mixed content, i.e., elements containing a mix of text and markup, such as:

<p>This is only an illustrative <hi>example</hi>, not a <q>real</q> text.</p>

Since the logical flow of the text is interrupted by the child elements, you will typically want to search across elements, so that the above paragraph would match a search for “real text”. For more examples, see XQuery and XPath Full Text 1.0 Use Cases.

To enable this kind of search, it is advisable to:

  • Keep whitespace stripping turned off when importing XML documents. This can be done by ensuring that STRIPWS is disabled. It can also be done in the GUI when a new database is created (Database → New… → Parsing → Strip Whitespace).
  • Keep automatic indentation turned off. Ensure that the serialization parameter indent is set to no.
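
As a small illustration of the first point, a database can also be created directly from XQuery with whitespace stripping disabled and a full-text index enabled. This is only a sketch: the file name input.xml is made up, and it assumes that the options map of db:create accepts the stripws and ftindex keys (cf. the db:create example further below):

(: create a database from a local file, keep whitespace, build a full-text index :)
db:create(
  'docs',
  'input.xml',
  'input.xml',
  { 'stripws': false(), 'ftindex': true() }
)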

A query such as //p[. contains text 'real text'] will then match the example paragraph above. However, the full-text index will not be used in this query, so it may take a long time. The full-text index would be used for the query //p[text() contains text 'real text'], but this query will not find the example paragraph because the matching text is split over two text nodes.

Note that the node structure is ignored by the full-text tokenizer: The contains text expression applies all full-text operations to the string value of its left operand. As a consequence, the ft:mark and ft:extract functions will only yield useful results if they are applied to single text nodes, as the following example demonstrates:

(: Structure is ignored; no highlighting. :)
ft:mark(//p[. contains text 'real'])
(: Single text nodes are addressed; results will be highlighted. :)
ft:mark(//p[.//text() contains text 'real'])

BaseX does not support the ignore option (without content) of the W3C XQuery Full Text 1.0 Recommendation. If you want to ignore descendant element content, such as footnotes or other material that does not belong to the same logical text flow, you can build a second database in which all content that should not be searched has been removed. See the following example (visit Updates for more details):

let $docs := db:get('docs')
return db:create(
  'index-db',
  $docs update delete node (
    .//footnote
  ),
  $docs/db:path(.),
  { 'ftindex': true() }
)

Functions

Some additional Full-Text Functions have been added to BaseX to extend the official language recommendation with useful features, such as explicitly requesting the score value of an item, marking the hits of a full-text request, or directly accessing the full-text index with the default index options.

Collations

See XQuery 3.1 for standard collation features.

By default, string comparisons in XQuery are based on the Unicode codepoint order. The collation URI http://www.w3.org/2003/05/xpath-functions/collation/codepoint denotes this default ordering. In BaseX, the following URI syntax is supported to specify collations:

http://basex.org/collation?lang=...;strength=...;decomposition=...

Semicolons can be replaced with ampersands; for convenience, the URL can be reduced to its query string component (including the question mark). All arguments are optional:

  • lang: A language code, selecting a Locale. It may be followed by a language variant. If no language is specified, the system’s default will be chosen. Examples: de, en-US.
  • strength: The level of difference considered significant in comparisons. Four strengths are supported: primary, secondary, tertiary, and identical (see the German example below).
  • decomposition: Defines how composed characters are handled. Three decompositions are supported: none, standard, and full. More details are found in the JavaDoc of the JDK.
Examples:
  • If a default collation is specified, it applies to all collation-dependent string operations in the query. The following expression yields true:
declare default collation 'http://basex.org/collation?lang=de;strength=secondary';
'Straße' = 'Strasse'
  • Collations can also be specified in order by and group by clauses of FLWOR expressions. This query returns à plutôt! bonjour!:
for $w in ("bonjour!", "à plutôt!") order by $w collation "?lang=fr" return $w
  • Various string functions exist that take an optional collation as an argument. The following function calls give us a and 1 2 3 as results:
distinct-values(("a", "á", "à"), "?lang=it-IT;strength=primary"),
index-of(("a", "á", "à"), "a", "?lang=it-IT;strength=primary")

If the ICU Library is added to the classpath, the full Unicode Collation Algorithm features become available:

(: returns 0 (both strings are compared as equal) :)
compare('a-b', 'ab', 'http://www.w3.org/2013/collation/UCA?alternate=shifted')

Changelog

Version 9.6
Version 9.5
  • Removed: Scoring propagation.
Version 9.2
  • Added: Arabic stemmer.
Version 8.0
  • Updated: Scoring will be propagated by the and and or expressions and in predicates.
Version 7.7
Version 7.3
  • Removed: Trie index, which was specialized on wildcard queries. The fuzzy index now supports both wildcard and fuzzy queries.
  • Removed: TF/IDF scoring was discarded in favor of the internal scoring model.
